New report from Infosys and MIT Technology Review Insights shows that trust, transparency and a ‘safe to fail’ culture are essential for scaling AI initiatives across global organizations
CAMBRIDGE, Mass. and BENGALURU, India, Dec. 16, 2025 /PRNewswire/ — A new global report by Infosys (NSE, BSE, NYSE: INFY) and MIT Technology Review Insights reveals that 83 percent of business leaders believe psychological safety directly impacts the success of enterprise AI initiatives. Creating psychological safety in an era of AI takes more than good intentions or blanket HR policies; it requires explicit messaging about AI’s realistic capabilities, limits and approved use cases. Through its collaboration with MIT Technology Review Insights, Infosys aims to equip global leaders with insights and strategies to adopt AI responsibly at scale, leveraging Infosys Topaz, an AI-first suite of services, solutions and platforms.
The report, “Creating Psychological Safety in the AI Era,” highlights how employees often hesitate to experiment, challenge assumptions or lead projects due to fear of backlash, which undermines innovation even when the technological capabilities exist. The report shows that despite major investments in AI, workplace fear, particularly fear of failure, remains one of the biggest barriers to adoption.
Despite rapid advances in AI technology, the report finds that human factors are holding enterprises back. Fear of failure, unclear communication and limited leadership openness often prevent employees from fully engaging with AI initiatives. In fact, organizations may have the tools and strategies in place, but without psychological safety, adoption falters. The findings highlight that scaling AI is as much about building trust and resilience within the workforce as it is about deploying cutting-edge systems.
The report’s key findings include:
A culture of psychological safety drives greater success with AI projects. More than four out of five (83 percent) respondents say psychological safety has a measurable impact on the success of AI initiatives, and 84 percent report direct links between psychological safety and tangible business outcomes.
Fear is holding leaders back. Nearly one-quarter (22 percent) of respondents admit they have hesitated to lead or suggest an AI project because of fear of failure or potential criticism; encouragingly, nearly three-quarters (73 percent) say they feel safe to provide honest feedback and express opinions freely in the workplace.
Achieving psychological safety is a moving target. Fewer than half (39 percent) of respondents describe their current level of psychological safety as “high,” while 48 percent report a “moderate” degree of it. This points to a gap: some enterprises are pursuing AI adoption on cultural foundations that are not yet fully stable.
Communication and leadership behaviors are critical levers. Sixty percent of respondents say that clarity on how AI will and won’t impact jobs would do the most to improve psychological safety, while just over half (51 percent) point to leadership modeling openness to questions, dissent and failure as equally important.
Creating psychological safety takes more than good intentions or HR policies. It requires explicit messaging about AI’s realistic capabilities, limits and approved use cases. Clear communication and ongoing dialogue help companies prioritize transparency, ethics and stakeholder engagement.