OpenAI is hiring an executive focused on AI-related risks to mental health and computer security.
Writing in an X post Saturday (Dec. 27), CEO Sam Altman said that the new “Head of Preparedness” role comes amid the rise of new challenges related to artificial intelligence (AI).
“The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities,” Altman wrote.
His comments were flagged in a report by TechCrunch, which noted that the company’s job listing describes the role as being responsible for the framework that explains OpenAI’s “approach to tracking and preparing for frontier capabilities that create new risks of severe harm.”
The report added that OpenAI first launched a preparedness team in 2023, saying it would be charged with studying potential “catastrophic risks,” ranging from immediate threats such as phishing to theoretical ones such as nuclear attacks.
However, TechCrunch added that the company has since reassigned its first head of preparedness, Aleksander Madry, to a job focused on AI reasoning, and that other safety executives have departed the startup or moved into roles unrelated to safety.
The news comes weeks after OpenAI said it would add new safeguards to its AI models in response to rapid advancements across the industry. Those developments, the company said, create benefits for cyberdefense while also carrying dual-use risks: the same capabilities could be used for malicious purposes as well as defensive ones.
To illustrate the advancements in AI models, the company cited on its blog capture-the-flag challenges in which its models’ performance improved from 27% with GPT-5 in August to 76% with GPT-5.1-Codex-Max in November.
“We expect that upcoming AI models will continue on this trajectory; in preparation, we are planning and evaluating as though each new model could reach ‘High’ levels of cybersecurity capability, as measured by our Preparedness Framework,” OpenAI wrote.
PYMNTS reported last month that AI has become both a tool and a target for cybersecurity.
The PYMNTS Intelligence report “From Spark to Strategy: How Product Leaders Are Using GenAI to Gain a Competitive Edge” found that a little more than three-quarters (77%) of chief product officers using generative AI for cybersecurity said it still requires human oversight.
OpenAI also instituted new parental controls earlier this year and announced plans for an automated age-prediction system. PYMNTS reported at the time that the new measures followed a lawsuit from the parents of a teenager who died by suicide, accusing OpenAI’s ChatGPT chatbot of encouraging the boy’s actions.