As manufacturers adopt artificial intelligence, much of the focus has been on tools, systems and performance. But according to Chris Draper, CEO and co-founder of morriganAI and author of Safe AI Basics and co-author of Governing Artificial Intelligence, the real risk lies in how humans interact with those systems over time. 

Draper says this dynamic is one of the most overlooked aspects of AI implementation, especially in environments where humans are expected to monitor automated systems. 

“If you put a human in a system to do something repetitively, the effectiveness of that human will always vary as a function of time and shift and experience.” 

In many cases, companies assume that simply keeping a person “in the loop” ensures safety. But Draper argues that’s not necessarily true. 

“Just having a human in a system does not make AI safe.” 

He warns that poorly designed systems — especially those where humans are expected to passively monitor performance — can actually increase risk. 

“Those are incredibly dangerous.” 

Draper describes three models for integrating humans into AI systems: “in the loop,” “on the loop” and “out of the loop.” In a human-in-the-loop system, work stops unless a person verifies each step, creating a high level of safety but limiting efficiency. In a human-out-of-the-loop system, processes are fully automated with no human intervention, which can be effective if designed correctly. However, Draper says most companies are implementing “on-the-loop” systems, where technology runs independently and humans are expected to catch errors, a model he warns carries significant risk. 

“Those have to be rolled out very thoughtfully. I argue in most of my strategy work, you should never have a human-on-the-loop system in production, because a human on the loop, in production, is like a Tesla. It can theoretically be autopilot. But if it starts working well enough, the human who is supposed to be your safety feature gets worse. They get comfortable. They start missing errors because they assumed it was right 80% of the time. And then they just start missing a majority of that 20% they’re supposed to catch.”
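The three models can be sketched in a few lines of Python. This is a minimal illustration of the concepts, not anything from Draper's work; the function names are hypothetical, and the "attention decay" number is an arbitrary stand-in for the complacency effect he describes:

```python
import random

def ai_step(item):
    """Hypothetical AI stage: proposes an action for one work item."""
    return f"action-for-{item}"

def human_verify(proposal):
    """Hypothetical human check; on a real line this is a person, not code."""
    return True  # stand-in for an operator's explicit approval

def in_the_loop(items):
    """Human IN the loop: work stops until a person verifies each step.
    High safety, limited throughput -- the human is the bottleneck."""
    results = []
    for item in items:
        proposal = ai_step(item)
        if human_verify(proposal):  # nothing proceeds without sign-off
            results.append(proposal)
    return results

def out_of_the_loop(items):
    """Human OUT of the loop: fully automated, no human gate.
    Effective only if the process is designed and bounded correctly."""
    return [ai_step(item) for item in items]

def on_the_loop(items, attention=1.0, decay=0.001):
    """Human ON the loop: automation runs, a person passively monitors.
    The model Draper warns about -- as the system 'works well enough',
    vigilance erodes and faults start slipping past the monitor."""
    results = []
    for item in items:
        proposal = ai_step(item)
        caught = random.random() < attention     # chance the monitor is paying attention
        attention = max(0.0, attention - decay)  # complacency: attention decays over time
        results.append((proposal, caught))
    return results
```

The point of the sketch is structural: in the first model the human gate is load-bearing, in the second there is no gate by design, and in the third the gate exists but its reliability silently degrades with every item that goes by.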

Draper has spent much of his career working on how humans interact with complex, high-risk technologies. Alongside Tom Foreman, he also co-created the AI Action Lab to help organizations better understand how AI behaves in real operational settings. 

At the core of his perspective is a fundamental reframing of what AI actually is, and what it is not. 

“If you look at your AI implementation as a technology process, it will always fail. There’s no way around that.” 

Rather than viewing AI as a traditional automation tool, Draper argues it operates in a completely different category. 

“AI is not… like video conferencing. It’s a tool much more like YouTube.” 

That distinction matters because AI does not simply execute predefined instructions. It interprets inputs and generates outputs based on probability. 

“Every AI tool… keeps continually generating synthetic information.” 

This introduces a level of uncertainty that manufacturers are not always prepared for. 

“AI is an ultra-hazardous technology. If I have a small change in hazard, I have potentially exponential impact consequences.” 

One of the most common mistakes, Draper says, is assuming AI can be layered onto existing processes to improve efficiency. 

“AI as a tool that is just going to speed up the process that stays the same is never going to return for you.” 

In some cases, that approach can amplify existing problems. 

“With AI, you can go bankrupt faster than ever before.” 

This is especially true when organizations deploy AI agents without fully understanding how they function or how closely they need to be monitored. 

A key issue, Draper explains, is that many organizations misunderstand the role of AI within a system. Rather than replacing human decision-making, AI should be seen as augmenting it. 

That means the success of AI depends heavily on where and how humans are positioned within the process. 

“If you put the human in the wrong place, you’re going to have significant, serious issues.” 

For manufacturers, the takeaway is that AI is not simply a technology investment. It is a shift in how decisions are made, how systems operate and how humans interact with those systems. 

“It will always be something that is amplifying, accelerating the human.” 

As adoption continues, companies that recognize that reality, and design systems around it, will be better positioned to avoid the hidden risks that come with AI implementation.