Quantitative Threat Forecasting Analyst

OpenAI

Hybrid

Regular employment

5 - 15 years of experience

Full Time

San Francisco, United States

Responsibilities

About the Team

The Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem. We are dedicated to identifying emerging abuse trends, analyzing risks, and working with our internal and external partners to implement effective mitigation strategies to protect against misuse. Our efforts contribute to OpenAI's overarching goal of developing AI that benefits humanity.

We’re looking for a world-class quantitative analyst to build the predictive backbone of this mission—someone who thrives on modeling ambiguity, forecasting high-stakes outcomes, and translating messy, sparse, or fast-moving data into decision-ready insight.

 

About the Role

As a Quantitative Threat Forecasting Analyst, you’ll design and deploy statistical models that forecast threat emergence, detect anomalies, and quantify risk—often when signal is weak, timelines are short, and the stakes are high. Your work will power both tactical responses to abuse and strategic decisions about how we evolve our safety detection, investigation and analysis systems.

This is a rare opportunity to apply advanced statistical modeling, risk analytics, and real-world inference to one of the most consequential safety challenges of our time.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:

  • Design probabilistic & Bayesian models using PyMC, NumPyro (JAX‑accelerated HMC/NUTS) and TensorFlow Probability to capture uncertainty at scale (see the first sketch after this list).

  • Build classical and deep‑learning forecasts with statsmodels baselines, plus state‑of‑the‑art libraries like Darts, GluonTS, Chronos, sktime and Nixtla’s MLForecast for multivariate or long‑horizon time‑series problems (baseline example below).

  • Develop real‑time anomaly‑detection pipelines leveraging PyOD 2.0 for GPU‑ready detectors and River for streaming/online ML on telemetry data (streaming sketch below).

  • Apply survival‑analysis and rare‑event methods (e.g., Cox PH, random‑survival‑forests, DeepSurv) via scikit‑survival to model threat lifecycles and hazard rates (survival example below).

  • Run stress tests & Monte Carlo simulations to evaluate the likelihood and impact of low‑frequency, high‑severity threats; translate findings into resilient safety‑engineering requirements (simulation sketch below).

  • Collaborate across disciplines—investigations, engineering, policy—to embed statistical rigor into threat prioritization, guardrails, and product decisions.

  • Communicate insights through clear briefs, dashboards, and visualizations that drive executive action.

  • Own production pipelines in Python/JAX/PyTorch or R, using SQL or Spark‑like engines (DuckDB, BigQuery, Snowflake) and GPU/TPU acceleration where appropriate (query example below).
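
To give a flavor of the probabilistic modeling described above, here is a minimal NumPyro sketch that fits a Poisson growth model to hypothetical daily abuse-report counts with JAX-accelerated NUTS; the data and model structure are illustrative assumptions only.

    import jax.numpy as jnp
    from jax import random
    import numpyro
    import numpyro.distributions as dist
    from numpyro.infer import MCMC, NUTS

    # Hypothetical daily counts of flagged events (illustrative only).
    counts = jnp.array([3, 5, 2, 8, 13, 21, 18, 30])
    t = jnp.arange(counts.shape[0], dtype=jnp.float32)

    def model(t, counts=None):
        # Uncertain baseline rate and exponential growth factor.
        base = numpyro.sample("base_rate", dist.Exponential(1.0))
        growth = numpyro.sample("growth", dist.Normal(0.0, 0.5))
        rate = base * jnp.exp(growth * t)
        numpyro.sample("obs", dist.Poisson(rate), obs=counts)

    # NUTS sampling; the posterior on `growth` quantifies, with uncertainty,
    # how quickly the threat is escalating.
    mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
    mcmc.run(random.PRNGKey(0), t, counts=counts)
    mcmc.print_summary()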
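
A classical statsmodels baseline for the forecasting work might look like the sketch below; the synthetic series and the weekly seasonal order are assumptions standing in for real telemetry.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Hypothetical daily flagged-event volumes with a weekly cycle (illustrative).
    idx = pd.date_range("2024-01-01", periods=120, freq="D")
    rng = np.random.default_rng(0)
    daily_counts = pd.Series(
        50 + 10 * np.sin(2 * np.pi * np.arange(120) / 7) + rng.normal(0, 3, 120),
        index=idx,
    )

    # Seasonal ARIMA baseline with a weekly period.
    result = SARIMAX(daily_counts, order=(1, 1, 1), seasonal_order=(1, 0, 1, 7)).fit(disp=False)

    # 14-day-ahead forecast with an 80% prediction interval.
    forecast = result.get_forecast(steps=14)
    predicted = forecast.predicted_mean
    interval = forecast.conf_int(alpha=0.2)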
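
For the streaming anomaly-detection work, a minimal River sketch over a hypothetical telemetry stream could look like this; the feature names and the alert threshold are placeholders.

    from river import anomaly, preprocessing

    # Online scaler plus Half-Space Trees, which expects features in [0, 1].
    scaler = preprocessing.MinMaxScaler()
    detector = anomaly.HalfSpaceTrees(n_trees=25, height=10, seed=42)

    def score_event(event):
        """Score one telemetry record (a dict of numeric features) online."""
        scaler.learn_one(event)
        x = scaler.transform_one(event)
        score = detector.score_one(x)   # higher = more anomalous
        detector.learn_one(x)
        return score

    # Example usage over a hypothetical stream of feature dicts:
    telemetry_stream = [{"requests": 12.0, "errors": 0.0}, {"requests": 950.0, "errors": 41.0}]
    alerts = [e for e in telemetry_stream if score_event(e) > 0.9]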
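
The survival-analysis work might translate into a scikit-survival sketch like the following, fitting a Cox proportional-hazards model to synthetic campaign-lifetime data; the features and censoring scheme are invented for illustration.

    import numpy as np
    from sksurv.linear_model import CoxPHSurvivalAnalysis
    from sksurv.util import Surv

    # Synthetic data: each row is one hypothetical abuse campaign with two
    # features; `time` is days until takedown, event=False means still active.
    rng = np.random.default_rng(0)
    n = 300
    X = rng.normal(size=(n, 2))
    hazard = np.exp(0.8 * X[:, 0] - 0.3 * X[:, 1])
    time = rng.exponential(scale=1.0 / hazard) * 10
    event = rng.random(n) < 0.8
    y = Surv.from_arrays(event=event, time=time)

    # Cox PH recovers a log-hazard ratio per feature.
    cox = CoxPHSurvivalAnalysis()
    cox.fit(X, y)
    print(cox.coef_)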
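
For the stress-testing work, a compact Monte Carlo sketch of a low-frequency, high-severity threat might look as follows; the frequency and impact distributions are stand-in assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    n_sims = 100_000

    # Uncertain annual incident rate (Gamma) and heavy-tailed impact per
    # incident (log-normal); both distributions are illustrative assumptions.
    annual_rate = rng.gamma(shape=2.0, scale=0.25, size=n_sims)
    incidents = rng.poisson(annual_rate)
    total_impact = np.array(
        [rng.lognormal(mean=1.0, sigma=1.5, size=k).sum() for k in incidents]
    )

    p_any = (incidents > 0).mean()
    tail_99 = np.quantile(total_impact, 0.99)
    print(f"P(at least one incident) = {p_any:.1%}; 99th-percentile annual impact = {tail_99:.1f}")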
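
Finally, the pipeline work could involve DuckDB aggregations along the lines of the sketch below; the table name, columns, and query are hypothetical.

    import duckdb
    import pandas as pd

    # Hypothetical telemetry records (illustrative only).
    telemetry = pd.DataFrame({
        "event_ts": pd.to_datetime(["2024-06-01 03:00", "2024-06-01 14:30", "2024-06-02 09:15"]),
        "flagged": [True, True, False],
    })

    con = duckdb.connect()
    daily = con.execute(
        """
        SELECT date_trunc('day', event_ts) AS day,
               count(*) FILTER (WHERE flagged) AS flagged_events
        FROM telemetry
        GROUP BY 1
        ORDER BY 1
        """
    ).df()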

You might thrive in this role if you have:

  • 5+ years of experience in a quantitative research, forecasting, or risk modeling role in finance, tech, safety, security, or public policy.

  • Deep fluency in statistical inference, forecasting, uncertainty quantification, and decision modeling—especially under sparse or adversarial data conditions.

  • Demonstrated impact: you’ve shipped models that directly informed capital allocation, fraud prevention, incident response, or safety interventions.

  • Expertise with modern toolchains—NumPyro, TensorFlow Probability, PyMC, Darts, GluonTS/Chronos, sktime, PyOD 2.0, River, scikit‑survival—and readiness to evaluate emerging libraries as the field evolves. 

  • Strong coding skills (Python/JAX/PyTorch or R) and data‑engineering fundamentals (SQL, Spark, data warehousing).

  • Crisp communication skills, with the ability to influence multidisciplinary partners and executives.

  • Comfort navigating imperfect data and prioritizing under uncertainty in a rapidly changing threat landscape.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Required skills

Big Data
Data Analysis
Engineering
Forecasting
Machine Learning
Problem Solving
Python
Quantitative Analytics
R
Risk Analysis
SQL
Statistical Analysis
Pipelines
Stress Testing
Technical Communication Skills
Data Visualization
Data Warehousing
Incident Detection
Data Modeling
Snowflake
Apache Spark
Adaptability
Data Engineering
Collaboration with Stakeholders
Deep Learning
PyTorch
TensorFlow
NumPy
Dashboard Management
English