The Business & Technology Network
Helping Business Interpret and Use Technology
OpenAI offers $555K salary for stressful head of preparedness role

DATE POSTED: December 29, 2025

OpenAI has initiated a search for a new “head of preparedness” to manage artificial intelligence risks, a position offering an annual salary of $555,000 plus equity, Business Insider reports.

CEO Sam Altman described the role as “stressful” in an X post on Saturday, emphasizing its critical nature given the rapid improvements and emerging challenges presented by AI models.

The company seeks to mitigate potential downsides of AI, including job displacement, misinformation, malicious use, environmental impact, and the erosion of human agency. Altman noted that while models are capable of beneficial applications, they are also beginning to pose challenges, such as harms to mental health and the ability to identify critical cybersecurity vulnerabilities, issues he said were previewed in 2025 and are now appearing in current capabilities.

ChatGPT, OpenAI’s AI chatbot, has gained popularity among consumers for general tasks like research and drafting emails. However, some users have engaged with the chatbot as an alternative to therapy, which, in certain instances, has exacerbated mental health issues, contributing to delusions and other concerning behaviors. OpenAI stated in October that it was collaborating with mental health professionals to improve ChatGPT’s interactions with users exhibiting concerning behavior, including psychosis or self-harm.

OpenAI’s founding mission centers on developing AI to benefit humanity, with safety protocols established early in its operations. Former staffers have indicated that the company’s focus shifted towards profitability over safety as products were released.

Jan Leike, former leader of OpenAI’s dissolved safety team, resigned in May 2024, stating on X that the company had “lost sight of its mission to ensure the technology is deployed safely.” Leike argued that building “smarter-than-human machines is an inherently dangerous endeavor” and expressed concern that “safety culture and processes have taken a backseat to shiny products.” Another staffer resigned less than a week later, citing similar safety concerns.

Daniel Kokotajlo, another former staffer, resigned in May 2024, saying he had been “losing confidence” in OpenAI’s responsible behavior concerning artificial general intelligence (AGI). Kokotajlo later told Fortune that the number of personnel researching AGI-related safety issues had been nearly halved from an initial count of about 30.

Aleksander Madry, the prior head of preparedness, transitioned to a new role in July 2024. The head of preparedness position, part of OpenAI’s Safety Systems team, focuses on developing safeguards, frameworks, and evaluations for the company’s models. The job listing specifies responsibilities including “building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline.”
