AI Profiling Risk

Credit: Image generated by the editorial team using AI for illustrative purposes.

Large language models (LLMs), the computational models underpinning ChatGPT, Gemini, and similar conversational platforms, are now used daily by people worldwide. Because these models can rapidly answer queries on most topics, many users turn to them for information related to their personal and professional lives, sometimes sharing details about themselves in the process.

Researchers at ETH Zurich recently set out to better understand the extent to which people’s interactions with LLMs can be used to infer their personality traits. The results of their study, posted to the arXiv preprint server, suggest that AI can predict well-established aspects of people’s personality from their ChatGPT chat history with striking accuracy.

“Our main aim was to quantify in what way and to what extent personal data, and especially personality traits, can be inferred from how people interact with LLM-based systems as conversational agents (e.g., ChatGPT, Claude),” Noé Zufferey, senior author of the paper, told Tech Xplore.

“This is important to evaluate the risks related to the potential misuse of such AI agents for large-scale profiling, influence, and manipulation. As the most widely used AI systems worldwide are owned by private companies that have self-interests and specific political agendas, evaluating such risks is highly important to understand how they could be harmful to democratic societies.”

Can AI infer our personality traits from our ChatGPT chat history?

Credit: Cögendez, Zimmermann, Zufferey.

As platforms like ChatGPT are now widely used, delineating the privacy- and security-related risks that come with their use is of the utmost importance. This can ultimately help to develop strategies and tools to address and mitigate these risks.

“For instance, AI agents could be leveraged for mass surveillance and massive propaganda campaigns that would make extensive use of personalized user-AI interaction based on the large amount of personal data they have at their disposal,” said Zufferey.

“Some companies and governments even directly aim for this (see Palantir and their recently published manifesto). It is known that many people use AI agents as an informational interface, a virtual friend, and even a personal coach, tutor, and therapist.

“There are many doors that can be used by the AI-service provider to directly get into the user’s mind, especially as some people are inclined to the so-called ‘cognitive surrender,’ i.e., the tendency of individuals to rely on AI rather than on their own thoughts and opinions.”


Analyzing the chat history of hundreds of users

As part of their study, Zufferey and his colleagues asked 668 ChatGPT users to share copies of their chat history. They then trained artificial intelligence (AI) models to infer personality traits from these chats.

The researchers collected and analyzed around 62,000 chats, categorizing conversations by topic. From the chats, the AI models tried to infer five personality traits that are widely assessed in psychology research, known as the “Big Five.”

These traits are extraversion, agreeableness, conscientiousness (i.e., sense of responsibility and dependability), neuroticism (i.e., emotional instability) and openness to experience (i.e., curiosity and willingness to try new things).

“We recruited hundreds of ChatGPT users and asked them to, on the one hand, send us a copy of the ChatGPT history (logs of all their past interactions), and, on the other hand, answer a personality test allowing us to evaluate their traits (according to the well-known OCEAN model, aka Big Five),” explained Zufferey.

“This dataset was then used to train AI models and evaluate how performant they are in guessing users’ personality traits based on their interaction with ChatGPT.”
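
The paper's exact modeling setup is not detailed here, but the pipeline Zufferey describes, chat text in, questionnaire scores out, can be pictured with a minimal sketch. The TF-IDF features, ridge regression, and variable names below are illustrative assumptions, not the authors' actual method.

```python
# A minimal, hypothetical sketch of the kind of pipeline described above:
# predict Big Five questionnaire scores from users' concatenated chat histories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def train_trait_models(chat_texts, trait_scores):
    """chat_texts: list of strings, one concatenated chat history per user.
    trait_scores: dict mapping trait name -> list of self-reported scores."""
    models = {}
    for trait in TRAITS:
        X_train, X_test, y_train, y_test = train_test_split(
            chat_texts, trait_scores[trait], test_size=0.2, random_state=0)
        model = make_pipeline(TfidfVectorizer(max_features=20000), Ridge())
        model.fit(X_train, y_train)
        # Correlate predictions with self-reported scores on held-out users.
        r, _ = pearsonr(model.predict(X_test), y_test)
        print(f"{trait}: r = {r:.2f}")
        models[trait] = model
    return models
```

Fitting one regressor per trait makes it straightforward to report how well each of the Big Five is recovered, which mirrors the per-trait results discussed below.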

Interestingly, the researchers found that their AI model could predict, from ChatGPT interactions alone, personality traits that were separately assessed in a standard psychological test. For some traits, such as extraversion and neuroticism, the model’s predictions were more accurate than for others.

“What we learned is that specific use cases and interactions based on specific data types can lead to the inference of specific traits,” said Zufferey.

“In other words, the information a model gets about your personality depends on the type of interaction you have and the type of data you disclose. For example, whereas people using AI agents to discuss their relationships are more likely to have their extraversion level correctly inferred, people who talk about religion to an AI agent put their conscientiousness level at greater risk.”
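
One way to picture the per-topic finding in the quote above is to correlate predicted and self-reported scores separately within each conversation topic. The sketch below is hypothetical; the column names and traits shown are assumptions for illustration, not the study's actual variables.

```python
# Hypothetical per-topic analysis: for each conversation topic, correlate the
# model's predicted trait score with the user's questionnaire score.
import pandas as pd

def per_topic_accuracy(df: pd.DataFrame) -> pd.DataFrame:
    """df has one row per user-topic pair, with columns such as
    'topic', 'predicted_extraversion', 'true_extraversion', etc."""
    traits = ["extraversion", "conscientiousness"]
    rows = []
    for topic, group in df.groupby("topic"):
        row = {"topic": topic}
        for t in traits:
            # Higher correlation = this topic reveals more about this trait.
            row[t] = group[f"predicted_{t}"].corr(group[f"true_{t}"])
        rows.append(row)
    return pd.DataFrame(rows).set_index("topic")
```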

Informing the development of privacy-enhancing techniques

This study highlights the privacy risks associated with the daily use of LLM-based platforms, showing that users’ chats could easily be used to create a general psychological profile of them. Notably, the team observed that even casual and seemingly non-personal chats often contain data that could be used to predict one’s personality traits.

Moreover, the researchers found that, generally, the more people interact with AI agents, the easier it is to correctly infer their personality traits from their chat history. In the future, their work could inspire the development of new privacy-preserving LLM designs or other tools that prevent users from being profiled by LLM-based platforms.

“For example, local functionalities can be developed to filter information upstream (i.e., before it is ‘sent’ to the AI agent), for example, following opt-in privacy settings,” added Zufferey.
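
As a rough illustration of the upstream, opt-in filtering Zufferey mentions, the sketch below redacts selected categories of personal detail locally before a prompt is sent. The categories, regular expressions, and function names are illustrative assumptions, not an existing tool.

```python
# A minimal sketch of a local, upstream prompt filter: personal details are
# redacted on the user's device before the text ever reaches the AI service.
import re

REDACTION_RULES = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def filter_prompt(prompt: str, opt_in: set[str]) -> str:
    """Redact only the categories the user has opted in to filtering."""
    for category, pattern in REDACTION_RULES.items():
        if category in opt_in:
            prompt = pattern.sub(f"[{category} removed]", prompt)
    return prompt

# Usage: redact email addresses, leave other details untouched.
safe = filter_prompt("Reach me at jane@example.com or +41 79 123 45 67",
                     opt_in={"email"})
```

Because the filtering runs on the user's device, the raw details never enter the provider's logs at all, which is what sets this approach apart from server-side privacy settings.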

“In our next studies, we will explore more of the risks for privacy and society related to the use of AI agents. For example, by analyzing diverse specific systems, other types of interactions, and more concrete threats. In addition, we will focus on developing privacy-enhancing tools that aim to maximize privacy with minimum impact on systems’ performance.”

Written for you by our author Ingrid Fadelli, edited by Sadie Harley, and fact-checked and reviewed by Robert Egan.

Publication details

Derya Cögendez et al, Can LLMs Infer Conversational Agent Users’ Personality Traits from Chat History?, arXiv (2026). DOI: 10.48550/arxiv.2604.19785

Journal information: arXiv


© 2026 Science X Network

Citation:
Can AI ascertain our personality traits from our ChatGPT history? (2026, May 5)
retrieved 5 May 2026
from https://techxplore.com/news/2026-05-ai-personality-traits-chatgpt-history.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.