{"id":15188,"date":"2026-04-24T08:21:11","date_gmt":"2026-04-24T08:21:11","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/15188\/"},"modified":"2026-04-24T08:21:11","modified_gmt":"2026-04-24T08:21:11","slug":"personality-identity-and-artificial-intelligence-a-grand-challenge","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/15188\/","title":{"rendered":"Personality, identity, and Artificial Intelligence: a grand challenge"},"content":{"rendered":"<p>Modern Artificial Intelligence (AI) is transforming personality research. Personality is expressed in a world in which individual aspirations and social interactions are increasingly mediated by AI, either overtly or covertly (<a class=\"ArticleReference\" href=\"#B16\" id=\"B16a\" data-event=\"articleReference-a-b16\">Matthews et al., 2024<\/a>). There are likely to be dynamic, reciprocal relationships between personality and technology usage. Personality factors such as trust and technophobia influence engagement with AI in work and leisure settings. Conversely, interactions with AI may feed back into stable attitudes, emotional dispositions, and social identities related to AI. Wider societal concerns about the impact of digital technologies such as smartphones on personality and social functioning are heightened by AI. In this Grand Challenge article, we briefly review some emerging research areas and topics that will advance our understanding of how personality and identity interact with AI in the context of technology-driven cultural change.<\/p>\n<p>Previous research has addressed both granular and high-level, \u201cbig picture\u201d issues (<a class=\"ArticleReference\" href=\"#B17\" id=\"B17a\" data-event=\"articleReference-a-b17\">Matthews et al., 2021<\/a>). Much extant personality research is granular: it explores how traits influence subjective experience and behavior in well-defined interactions with the physical or social environment. 
Studies of individual differences in interactions with specific digital systems, such as decision aids and chatbots, can promote constructive human-AI interactions by personalizing the technology to support individuals. The social context is also critical\u2014for example, AI systems increasingly participate in and interact with social media as autonomous agents. Joint human-AI online communities can have both beneficial and harmful effects: they can provide large-scale, effective emotional support and widen access to information (<a class=\"ArticleReference\" href=\"#B20\" id=\"B20a\" data-event=\"articleReference-a-b20\">O&#8217;Leary, 2023<\/a>). However, they can also create risks, as bots can amplify the effects of low-credibility content, exploit confirmation bias, and trigger \u201cinformation disorders\u201d and extremist dynamics through network effects (<a class=\"ArticleReference\" href=\"#B3\" id=\"B3a\" data-event=\"articleReference-a-b3\">Betts et al., 2026<\/a>; <a class=\"ArticleReference\" href=\"#B26\" id=\"B26a\" data-event=\"articleReference-a-b26\">Tomassi et al., 2024<\/a>).<\/p>\n<p>At a higher level, human-AI interactions take place amid rapid, technology-driven cultural changes that influence social norms of behavior. The nature of work is changing as physical and cognitive activities are offloaded to AI, and individuals are hired, fired, and monitored by AI (e.g., <a class=\"ArticleReference\" href=\"#B1\" id=\"B1a\" data-event=\"articleReference-a-b1\">Bankins et al., 2024<\/a>). Similarly, social relationships are increasingly structured around online communication, smartphone apps, and AI technology. Research on human\u2013machine systems shows that even peripheral AI agents can shape collective outcomes. 
Importantly, these outcomes arise from interdependent human\u2013human and human\u2013machine interactions, not from isolated actors (<a class=\"ArticleReference\" href=\"#B27\" id=\"B27a\" data-event=\"articleReference-a-b27\">Tsvetkova et al., 2024<\/a>). This means that a new challenge for social psychology is to treat AI as a social agent and theorize agency, trust, and moral influence in multi-agent (hybrid) systems. Because these hybrid systems are embedded within broader cultural ecologies, identity processes are simultaneously shaped by evolving social norms and collective narratives. Online identities are shaped by dynamic cultural trends, and how such trends play out will likely differ according to pre-existing cultural differences within and between nations.<\/p>\n<p>Research areas also differ in their focus on the individual vs. the social construction of identity. At the individual level, research builds on existing personality trait models, human-computer interaction (HCI) research, and applied cognitive psychology to investigate how individuals vary in their usage of AI-powered technologies and in their reactions to them. Beyond individual differences in human\u2013AI interaction, personality and social psychology must address identity. At the micro level, AI is now embedded in everyday settings and shapes how identities are activated and regulated through feedback. At the macro level, AI is part of the sociotechnical infrastructure through which collective meaning, moral values, and group boundaries are constructed, negotiated, and refined.<\/p>\n<p>Although research on personality, identity, and AI is thriving, it is often scattered and disconnected. Here, we aim to promote the coherence of research by identifying major research areas differentiated by the twin axes of (1) granularity vs. high-level, \u201cbig picture\u201d issues and (2) individual differences vs. social identity. 
Within this 2 \u00d7 2 scheme, we briefly define key research questions and new approaches to understanding personality in the digital age.<\/p>\n<p>Individual differences in interactions with AI<\/p>\n<p>Established personality dimensions such as internet anxiety, technophobia\/philia, and computer self-efficacy shape attitudes, emotions, and behaviors toward conventional computer systems (<a class=\"ArticleReference\" href=\"#B17\" id=\"B17a\" data-event=\"articleReference-a-b17\">Matthews et al., 2021<\/a>). However, the fundamental differences between conventional digital systems and modern AIs will change the role of personality in digital interactions. These differences include the vast cognitive capabilities of AI, social agency and natural language communication, the assumption of a consistent human-like persona, and a relationship history with the user. New scales for constructs such as trust in AI, humanlikeness, and social presence are proliferating (<a class=\"ArticleReference\" href=\"#B10\" id=\"B10a\" data-event=\"articleReference-a-b10\">Esterwood et al., 2021<\/a>). However, scale development is in its infancy. The quality and validity of these scales vary greatly, and there is no overarching psychometric model to support the integration of research findings.<\/p>\n<p>Research on the sources, development, and malleability of individual differences is also lacking. Concerns about the effects of smartphone use and social media on child development (<a class=\"ArticleReference\" href=\"#B12\" id=\"B12a\" data-event=\"articleReference-a-b12\">Haidt, 2024<\/a>) often neglect the nuances of individual differences in vulnerability to harmful impacts (Matthews et al., <a class=\"ArticleReference\" href=\"#B18\" id=\"B18a\" data-event=\"articleReference-a-b18\">in press<\/a>). 
Longitudinal studies of the development of attitudes toward AI are needed in both children and adults, because current research on new personality constructs is typically cross-sectional. Understanding the role of established, biologically-based traits such as the Big Five is essential, but some novel constructs, such as online personas, are less stable and more malleable than conventional traits (<a class=\"ArticleReference\" href=\"#B21\" id=\"B21a\" data-event=\"articleReference-a-b21\">Olivero et al., 2020<\/a>).<\/p>\n<p>A third research focus is the consequences of attitudes and emotions toward AI. Which personality factors influence offloading significant life decisions to AI, treating an AI as a friend or confidant, or allowing an AI agent to manage one&#8217;s social media interactions? These questions can be addressed through experimental studies and prospective studies examining how individual differences interact dynamically with AI usage.<\/p>\n<p>Stress and wellbeing in the age of AI<\/p>\n<p>The sociotechnical perspective on the impact of AI positions the individual within interacting technological, organizational, and cultural systems (<a class=\"ArticleReference\" href=\"#B15\" id=\"B15a\" data-event=\"articleReference-a-b15\">Matthews et al., 2025<\/a>; <a class=\"ArticleReference\" href=\"#B28\" id=\"B28a\" data-event=\"articleReference-a-b28\">Yu et al., 2023<\/a>), suggesting multiple goals for personality research. First, AI introduces novel threats such as uncertainty over the basis for AI judgments, social stressors associated with human-like interactions with AIs, and loss of self-esteem when AIs surpass humans in cognitive capabilities and decision-making authority (<a class=\"ArticleReference\" href=\"#B16\" id=\"B16a\" data-event=\"articleReference-a-b16\">Matthews et al., 2024<\/a>). There is an overarching threat to the person&#8217;s connection to reality or epistemic rationality. 
Maintaining wellbeing and a sense of cohesion depends not only on basic and AI-linked traits but also on socioeconomic status and the psychosocial environment (<a class=\"ArticleReference\" href=\"#B7\" id=\"B7a\" data-event=\"articleReference-a-b7\">Brunner and Marmot, 2006<\/a>). How will changes in social organization driven by AI interact with traits to produce stress at the individual and group levels?<\/p>\n<p>Second, AI enables both benefits and harms (e.g., <a class=\"ArticleReference\" href=\"#B5\" id=\"B5a\" data-event=\"articleReference-a-b5\">Bond et al., 2025<\/a>). AI can potentially improve quality of life by freeing individuals from mental drudgery, providing decision support based on a vast knowledge base, and powering therapeutic interventions. However, its knowledge base is contaminated by human biases, including racial and gender biases. Its outputs are frequently unexplained and unverifiable. It is indifferent to privacy, and it is readily utilized for malicious purposes, such as cybercrime and disseminating misinformation. In organizations, AI enables practices such as intrusive surveillance and algorithmic hiring and firing that dehumanize work relationships. Personalized alignment of AI to match the user&#8217;s personality, skills, and values can enhance its usefulness and widen access but also risks infringing on privacy and reinforcing bias (<a class=\"ArticleReference\" href=\"#B13\" id=\"B13a\" data-event=\"articleReference-a-b13\">Kirk et al., 2024<\/a>). Personalized alignment is more than a technical design issue. 
It raises difficult questions about how personalization should be bounded and who should decide on the principles that define those bounds (<a class=\"ArticleReference\" href=\"#B13\" id=\"B13a\" data-event=\"articleReference-a-b13\">Kirk et al., 2024<\/a>).<\/p>\n<p>Third, the usage and impact of AI reflect fluid and sometimes contested cultural norms, such as how stringently governments should regulate AI. There is also a growing body of literature on cross-cultural differences in attitudes toward AI and how these vary across cultural dimensions, such as individualism vs. collectivism (<a class=\"ArticleReference\" href=\"#B2\" id=\"B2a\" data-event=\"articleReference-a-b2\">Barnes et al., 2024<\/a>). Another research challenge is that AI&#8217;s increasing presence in our lives is likely to lead to new human cultures unlike anything in history, requiring emic perspectives on the attributes of personalities tied to digital culture to complement universal, etic models (<a class=\"ArticleReference\" href=\"#B17\" id=\"B17a\" data-event=\"articleReference-a-b17\">Matthews et al., 2021<\/a>).<\/p>\n<p>How does AI reshape the formation and regulation of identity in everyday social environments?<\/p>\n<p>AI systems do not merely deliver information but increasingly function as interactive social actors, as they respond to, adapt to, and mirror users, creating feedback loops that contribute to the activation, validation, and regulation of social identities (<a class=\"ArticleReference\" href=\"#B19\" id=\"B19a\" data-event=\"articleReference-a-b19\">Metzler and Garc\u00eda, 2023<\/a>; <a class=\"ArticleReference\" href=\"#B22\" id=\"B22a\" data-event=\"articleReference-a-b22\">Pedreschi et al., 2025<\/a>). 
Through these interactive processes, AI-mediated environments curate, amplify, and stabilize collective narratives that provide meaning, relevance, and moral orientation to group members (<a class=\"ArticleReference\" href=\"#B4\" id=\"B4a\" data-event=\"articleReference-a-b4\">Bliuc et al., 2024<\/a>; <a class=\"ArticleReference\" href=\"#B23\" id=\"B23a\" data-event=\"articleReference-a-b23\">Sartori and Theodorou, 2022<\/a>). These developments pose a theoretical challenge to our field because classic theories in social psychology assume that identities are shaped by interaction with other individuals, groups, and social institutions (<a class=\"ArticleReference\" href=\"#B24\" id=\"B24a\" data-event=\"articleReference-a-b24\">Spears, 2021<\/a>). However, AI introduces a novel \u201cinteraction partner,\u201d one that is highly responsive, personalized, scalable, and opaque, yet embedded in everyday communication, community participation, and information access. Existing theoretical models are not well-equipped to explain how identity dynamics operate when group-relevant narratives are curated or generated by adaptive systems. A central research question is whether AI-driven adaptation stabilizes social identities (perhaps by providing coherence, validation, and belonging) or narrows them by reinforcing exclusionary narratives and reducing identity flexibility. A related question concerns wellbeing\u2014for example, under what conditions do AI-mediated collective narratives support meaning, relevance, and psychological resilience, and when do they contribute to dependency or entrenchment? AI-augmented online communities provide an important test case. 
In contexts such as addiction recovery or mental health support groups, AI systems may facilitate positive identity change by supporting new recovery narratives (<a class=\"ArticleReference\" href=\"#B14\" id=\"B14a\" data-event=\"articleReference-a-b14\">Li et al., 2023<\/a>; <a class=\"ArticleReference\" href=\"#B25\" id=\"B25a\" data-event=\"articleReference-a-b25\">Thakkar et al., 2024<\/a>), but they may also reinforce static or dependent identities, posing additional risks (<a class=\"ArticleReference\" href=\"#B9\" id=\"B9a\" data-event=\"articleReference-a-b9\">Elyoseph et al., 2024<\/a>). Importantly, issues such as trust, vulnerability to misinformation, and polarization should be analyzed not only as cognitive biases but also as identity-regulatory processes operating through perceived group alignment and collective meaning-making (<a class=\"ArticleReference\" href=\"#B8\" id=\"B8a\" data-event=\"articleReference-a-b8\">Efstratiou and De Cristofaro, 2022<\/a>).<\/p>\n<p>AI as identity infrastructure: meaning, moral values, and boundaries<\/p>\n<p>AI systems in conjunction with online media are becoming a part of the infrastructure through which collective identities are constructed, represented, and evaluated. They can mediate how individuals and groups are seen by others, how collective narratives circulate and evolve, and how meaning and moral values are articulated and contested (<a class=\"ArticleReference\" href=\"#B11\" id=\"B11a\" data-event=\"articleReference-a-b11\">Gerbaudo, 2022<\/a>). 
Psychology currently lacks models of identity and agency for conditions in which aspects of identity are delegated to non-human agents (e.g., use of avatars, digital twins, and so on), personal and interaction data persist as identity traces, and social recognition is mediated by algorithmic systems rather than interpersonal or institutional processes (<a class=\"ArticleReference\" href=\"#B6\" id=\"B6a\" data-event=\"articleReference-a-b6\">Bonnefon et al., 2023<\/a>). These developments raise important questions about how collective meaning and moral order are sustained, both now and in the future. One risk concerns identity delegation and representation. When AI systems generate profiles, narratives, or predictions about individuals or groups, they relocate control over identity representation to system designers and other (ambiguous) operators, raising questions about agency, power, and voice. A second risk concerns collective meaning and moral values. Attitudes toward AI, privacy, and technological change often reflect group-based values and shared (ideologically bound) narratives rather than being driven by deliberative ethical reasoning, with implications for intergroup relations and polarization. A third risk concerns power and inequality, as AI-mediated systems may differentially amplify, normalize, constrain, or erase particular identities. If personality and social psychology fail to theorize identity, meaning, and moral boundaries under AI mediation, then we will lack empirically grounded and theoretically informed frameworks to detect, evaluate, and mitigate the potential engineering of identity by platforms, corporations, and state actors.<\/p>\n<p>Conclusion<\/p>\n<p>The AI revolution has stimulated a creative ferment of new research in personality and social psychology. 
We advocate a more systematic approach than currently exists, one centered on well-defined research questions related to individual differences in interactions with AI and the negotiation of identity in communities mediated by AI. <a class=\"ArticleReference\" href=\"#T1\" id=\"T1a\" data-event=\"articleReference-a-t1\">Table 1<\/a> summarizes some major topics for research that we have briefly discussed. These are not intended to be exhaustive, but they illustrate some areas open to focused investigation. There is also a need for multidisciplinary approaches that include perspectives from computer science, neuroscience, communication science, and sociology. We encourage researchers interested in these issues to consider the Personality and Social Psychology section of Frontiers in Psychology as an outlet for their studies.<\/p>\n<table>\n<tr><th><\/th><th>Granular<\/th><th>Big-picture<\/th><\/tr>\n<tr><td>Individual differences in human-AI interaction<\/td><td>\u2022 Constructing psychometrically rigorous dimensional structures to integrate new constructs with existing trait models<br \/>\u2022 Determining the sources, development, and malleability of personality factors linked to AI<br \/>\u2022 Determining the consequences of personality factors for usage and engagement with AI<\/td><td>\u2022 Investigating vulnerability and resilience to the novel stressors of the digital age from a sociotechnical perspective<br \/>\u2022 Personalizing AI at work and in other contexts to optimize the benefits-to-harms ratio at the individual and societal levels<br \/>\u2022 Investigating how multiple aspects of culture shape a person&#8217;s interactions with AI<\/td><\/tr>\n<tr><td>Construction of identity<\/td><td>\u2022 Identifying the mechanisms responsible for AI feedback loops (i.e., activating, stabilizing, or narrowing down social identities)<br \/>\u2022 Examining how algorithmic curation shapes perceived group norms and narratives<br \/>\u2022 Investigating identity flexibility vs. entrenchment under AI feedback loops<br \/>\u2022 Examining trust, information disorders, and polarization as processes that regulate identity<\/td><td>\u2022 Theorizing identity delegation and representational authority under hybrid interactions<br \/>\u2022 Understanding how AI systems shape moral boundaries and drive polarization<br \/>\u2022 Investigating power asymmetries in AI-mediated identity amplification or erasure<br \/>\u2022 Developing psychological, ethical, and moral frameworks for identity, agency, and representation in sociotechnical systems<\/td><\/tr>\n<\/table>\n<p>Table 1. A selection of research goals differentiated by granularity\/big picture and individual differences\/social identity perspectives.<\/p>\n<p>Statements<\/p>\n<p>Author contributions<\/p>\n<p>GM: Writing \u2013 original draft, Writing \u2013 review &amp; editing. A-MB: Writing \u2013 review &amp; editing, Writing \u2013 original draft.<\/p>\n<p>Conflict of interest<\/p>\n<p>The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.<\/p>\n<p>The authors GM and A-MB declared that they were editorial board members of Frontiers at the time of submission. This had no impact on the peer review process and the final decision.<\/p>\n<p>Generative AI statement<\/p>\n<p>The author(s) declared that generative AI was not used in the creation of this manuscript.<\/p>\n<p>Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence, and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.<\/p>\n<p>Publisher\u2019s note<\/p>\n<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. 
Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.<\/p>\n<p>References<\/p>\n
<p class=\"notranslate\">Bankins, S., Ocampo, A. C., Marrone, M., Restubog, S. L. D., and Woo, S. E. (2024). A multilevel review of artificial intelligence in organizations: implications for organizational behavior research and practice. J. Organ. Behav. 45, 159\u2013182. doi: 10.1002\/job.2735<\/p>\n
<p class=\"notranslate\">Barnes, A. J., Zhang, Y., and Valenzuela, A. (2024). AI and culture: culturally dependent responses to AI systems. Curr. Opin. Psychol. 58:101838. doi: 10.1016\/j.copsyc.2024.101838<\/p>\n
<p class=\"notranslate\">Betts, J. M., Bliuc, A.-M., and Courtney, D. S. (2026). The effect of charismatic influencers on polarization online: an agent-based modeling approach. Technol. Soc. 85:103179. doi: 10.1016\/j.techsoc.2025.103179<\/p>\n
<p class=\"notranslate\">Bliuc, A.-M., Betts, J. M., Vergani, M., Bouguettaya, A., and Cristea, M. (2024). A theoretical framework for polarization as the gradual fragmentation of a divided society. Commun. Psychol. 2:75. doi: 10.1038\/s44271-024-00125-1<\/p>\n
<p class=\"notranslate\">Bond, R. R., Ennis, E., and Mulvenna, M. D. (2025). How artificial intelligence may affect our mental well-being. Behav. Inform. Technol. 44, 2093\u20132100. doi: 10.1080\/0144929X.2025.2520593<\/p>\n
<p class=\"notranslate\">Bonnefon, J., Rahwan, I., and Shariff, A. (2023). The moral psychology of Artificial Intelligence. Annu. Rev. Psychol. 75, 653\u2013675. doi: 10.31234\/osf.io\/8ptdg<\/p>\n
<p class=\"notranslate\">Brunner, E., and Marmot, M. (2006). \u201cSocial organization, stress, and health,\u201d in Social Determinants of Health, eds. M. Marmot and R. G. Wilkinson (Oxford: Oxford University Press), 17\u201343.<\/p>\n
<p class=\"notranslate\">Efstratiou, A., and De Cristofaro, E. (2022). Adherence to misinformation on social media through socio-cognitive and group-based processes. Proc. ACM Hum.-Comput. Interact. 6, 1\u201335. doi: 10.1145\/3555589<\/p>\n
<p class=\"notranslate\">Elyoseph, Z., Gur, T., Haber, Y., Simon, T., Angert, T., Navon, Y., et al. (2024). An ethical perspective on the democratization of mental health with generative AI. JMIR Mental Health 11:e58011. doi: 10.2196\/58011<\/p>\n
<p class=\"notranslate\">Esterwood, C., Essenmacher, K., Yang, H., Zeng, F., and Robert, L. P. (2021). \u201cA meta-analysis of human personality and robot acceptance in human-robot interaction,\u201d in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1\u201318. doi: 10.1145\/3411764.3445542<\/p>\n
<p class=\"notranslate\">Gerbaudo, P. (2022). From individual affectedness to collective identity: personal testimony campaigns on social media and the logic of collection. New Media Soc. 26, 4904\u20134921. doi: 10.1177\/14614448221128523<\/p>\n
<p class=\"notranslate\">Haidt, J. (2024). The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness. New York, NY: Penguin. doi: 10.56315\/PSCF9-25<\/p>\n
<p class=\"notranslate\">Kirk, H. R., Vidgen, B., R\u00f6ttger, P., and Hale, S. A. (2024). The benefits, risks and bounds of personalizing the alignment of large language models to individuals. Nat. Mach. Intell. 6, 383\u2013392. doi: 10.1038\/s42256-024-00820-y<\/p>\n
<p class=\"notranslate\">Li, H., Zhang, R., Lee, Y., Kraut, R., and Mohr, D. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. NPJ Dig. Med. 6:236. doi: 10.1038\/s41746-023-00979-5<\/p>\n
<p class=\"notranslate\">Matthews, G., Cumings, R., De Los Santos, E. P., Feng, I. Y., and Mouloua, S. A. (2025). A new era for stress research: supporting user performance and experience in the digital age. Ergonomics 68, 913\u2013946. doi: 10.1080\/00140139.2024.2425953<\/p>\n
<p class=\"notranslate\">Matthews, G., Hancock, P., Szalma, J., Lin, J., and Panganiban, A. R. (2024). \u201cIndividual differences in teaming with artificial intelligence, robots, and virtual agents in the workplace,\u201d in The Oxford Handbook of Individual Differences in Organizational Contexts, eds. A. A. Oguz, A. Tuncdogan, H. Volberda, and K. de Ruyter (New York, NY: Oxford University Press), 345\u2013367. doi: 10.1093\/oxfordhb\/9780192897114.013.25<\/p>\n
<p class=\"notranslate\">Matthews, G., Hancock, P. A., Lin, J., Panganiban, A. R., Reinerman-Jones, L. E., Szalma, J. L., et al. (2021). Evolution and revolution: personality research for the coming world of robots, artificial intelligence, and autonomous systems. Pers. Individ. Dif. 169:109969. doi: 10.1016\/j.paid.2020.109969<\/p>\n
<p class=\"notranslate\">Matthews, G., Herzog, D., and Esau, A. (in press). \u201cPersonality change in the digital age: a sociotechnical perspective,\u201d in Handbook of Personality Processes: Momentary, Daily, Developmental, Generational Perspectives, eds. M. Robinson, T. Pringle, and B. Wilkowski (Cheltenham: Edward Elgar Press).<\/p>\n
<p class=\"notranslate\">Metzler, H., and Garc\u00eda, D. (2023). Social drivers and algorithmic mechanisms on digital media. Perspect. Psychol. Sci. 19, 735\u2013748. doi: 10.1177\/17456916231185057<\/p>\n
<p class=\"notranslate\">O&#8217;Leary, K. (2023). Human\u2013AI collaboration boosts mental health support. Nat. Med. doi: 10.1038\/d41591-023-00022-w. [Epub ahead of print].<\/p>\n
<p class=\"notranslate\">Olivero, M. A., Bertolino, A., Dom\u00ednguez-Mayo, F. J., Escalona, M. J., and Matteucci, I. (2020). Digital persona portrayal: identifying pluridentity vulnerabilities in digital life. J. Inform. Sec. Applic. 52:102492. doi: 10.1016\/j.jisa.2020.102492<\/p>\n
<p class=\"notranslate\">Pedreschi, D., Pappalardo, L., Baeza-Yates, R., Barab\u00e1si, A., Dignum, F., Dignum, V., et al. (2025). Human-AI coevolution. Artif. Intell. 339:104244. doi: 10.1016\/j.artint.2024.104244<\/p>\n
<p class=\"notranslate\">Sartori, L., and Theodorou, A. (2022). A sociotechnical perspective for the future of AI: narratives, inequalities, and human control. Ethics Inf. Technol. 24:4. doi: 10.1007\/s10676-022-09624-3<\/p>\n
<p class=\"notranslate\">Spears, R. (2021). Social influence and group identity. Annu. Rev. Psychol. 72, 367\u2013390. doi: 10.1146\/annurev-psych-070620-111818<\/p>\n
<p class=\"notranslate\">Thakkar, A., Gupta, A., and Sousa, A. (2024). Artificial intelligence in positive mental health: a narrative review. Front. Dig. Health 6:1280235. doi: 10.3389\/fdgth.2024.1280235<\/p>\n
<p class=\"notranslate\">Tomassi, A., Falegnami, A., and Romano, E. (2024). Mapping automatic social media information disorder. The role of bots and AI in spreading misleading information in society. PLoS ONE 19:e0303183. doi: 10.1371\/journal.pone.0303183<\/p>\n
<p class=\"notranslate\">Tsvetkova, M., Yasseri, T., Pescetelli, N., and Werner, T. (2024). A new sociology of humans and machines. Nat. Hum. Behav. 8, 1864\u20131876. doi: 10.1038\/s41562-024-02001-8<\/p>\n
<p class=\"notranslate\">Yu, X., Xu, S., and Ashton, M. (2023). Antecedents and outcomes of artificial intelligence adoption and application in the workplace: the socio-technical system theory perspective. Inform. Technol. People 36, 454\u2013474. doi: 10.1108\/ITP-04-2021-0254<\/p>\n
<p>Summary<\/p>\n<p class=\"h5\">Keywords<\/p>\n<p>artificial intelligence, personality, social identity, sociotechnical system, wellbeing<\/p>\n<p class=\"h5\">Citation<\/p>\n<p>Matthews G and Bliuc A-M (2026) Personality, identity, and Artificial Intelligence: a grand challenge. Front. Psychol. 17:1817687. 
doi: <a class=\"Summary__doi notranslate\" target=\"_blank\" href=\"http:\/\/dx.doi.org\/10.3389\/fpsyg.2026.1817687\" data-event=\"articleSummary-a-doi\" rel=\"nofollow noopener\">10.3389\/fpsyg.2026.1817687<\/a><\/p>\n<p class=\"h5\">Received<\/p>\n<p>25 February 2026<\/p>\n<p class=\"h5\">Accepted<\/p>\n<p>23 March 2026<\/p>\n<p class=\"h5\">Published<\/p>\n<p>24 April 2026<\/p>\n<p class=\"h5\">Volume<\/p>\n<p>17 &#8211; 2026<\/p>\n<p class=\"h5\">Edited and reviewed by<\/p>\n<p class=\"notranslate\"><a href=\"https:\/\/loop.frontiersin.org\/people\/7348\/overview\" target=\"_blank\" rel=\"nofollow noopener\">Axel Cleeremans<\/a>, Universit\u00e9 libre de Bruxelles, Belgium<\/p>\n<p class=\"h5\"> Copyright <\/p>\n<p class=\"Summary__copyright__text\">\u00a9 2026 Matthews and Bliuc. <\/p>\n<p>This is an open-access article distributed under the terms of the <a href=\"https:\/\/creativecommons.org\/licenses\/by\/4.0\/\" target=\"_blank\" rel=\"nofollow noopener\">Creative Commons Attribution License (CC BY)<\/a>. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. 
No use, distribution or reproduction is permitted which does not comply with these terms.<\/p>\n<p class=\"notranslate correspondence\">*Correspondence: Gerald Matthews, <a href=\"mailto:gmatthe@gmu.edu\" class=\"email-link\" rel=\"nofollow noopener\" target=\"_blank\">gmatthe@gmu.edu<\/a><\/p>\n<p class=\"h5\">Disclaimer<\/p>\n<p> All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher. <\/p>\n","protected":false},"excerpt":{"rendered":"Modern Artificial Intelligence (AI) is transforming personality research. Personality is expressed in a world in which individual aspirations&hellip;\n","protected":false},"author":2,"featured_media":360,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,25,11400,11401,11402,11399],"class_list":{"0":"post-15188","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-personality","11":"tag-social-identity","12":"tag-sociotechnical-system","13":"tag-wellbeing"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/15188","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=15188"}],"version-history":[{"count":0,"href":"https:\/\/w
ww.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/15188\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/360"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=15188"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=15188"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=15188"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}