Large language models (LLMs) like ChatGPT have become so good that they're starting to make traditional homework, like essays and conceptual questions, feel almost obsolete.

“The no-AI case doesn’t exist anymore,” says Robert Bray, a Kellogg associate professor of operations who teaches a course on how to use generative AI effectively. “If it’s homework, students are going to be using AI.”

From Bray’s perspective, to resist this trend is to fight a losing battle. But if he could adapt AI models to guide students through their assignments instead—and not simply spew out answers the way standard LLMs tend to do—then students could lean on AI as a helping hand rather than a crutch. That would give students the opportunity not only to build their AI skills but also, perhaps for the first time, to feel excited about doing their homework.

“If students are going to be using AI anyway, this gives educators a way to strategically customize AI and have some control over the operations,” he says. “But the question is, ‘Can professors actually create an AI homework experience that outperforms default models that are already insanely good?’”

Bray took this challenge head-on, focusing on one of his data-analytics courses for MBA students.

The course consisted of 20 teaching sessions and 19 quizzes that tested students’ ability to analyze operational data using code, with the help of ChatGPT. Each quiz had a corresponding optional homework assignment to help students prepare.

Bray collaborated with Sébastien Martin, also a Kellogg associate professor of operations, to create a customized AI agent specifically for this course. They fed GPT-4o, the base model behind ChatGPT, a set of instructions that explained each homework assignment and how the AI should assist students with it. The agent’s directive was to interact with students as a kind of virtual tutor.

For example, the professors prompted the model to “help the students solve each question, but don’t blurt out the answer. Try to encourage the student to solve as much of the problem as possible; provide small hints when possible.”
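For readers who want to picture the setup, here is a minimal sketch of how such a prompt-layered tutor could be built on the GPT-4o chat API. Only the quoted tutoring instructions come from the article; the function, parameters, and example exchange are illustrative assumptions, not the professors’ actual code.

```python
# Illustrative sketch of a prompt-based tutor agent on top of the GPT-4o chat API.
# Everything beyond the quoted tutoring instructions is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_INSTRUCTIONS = """
You are a tutor for a data-analytics homework assignment.
Help the students solve each question, but don't blurt out the answer.
Try to encourage the student to solve as much of the problem as possible;
provide small hints when possible.
"""

def ask_tutor(history: list[dict], student_message: str) -> str:
    """Send the running conversation plus the new student message to the model."""
    messages = [{"role": "system", "content": TUTOR_INSTRUCTIONS}]
    messages += history
    messages.append({"role": "user", "content": student_message})

    response = client.chat.completions.create(
        model="gpt-4o",    # base model named in the article
        messages=messages,
        temperature=0.3,   # illustrative choice: keep hints focused
    )
    return response.choices[0].message.content

# Hypothetical exchange
history: list[dict] = []
print(ask_tutor(history, "How do I start the question about the inventory data?"))
```

The key design choice is that the answer-withholding behavior lives entirely in the system prompt, so the same underlying model behaves as a tutor rather than an answer machine.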

“Whereas ChatGPT thinks its job is to give you the right answer, our AI agent actually knew that the job was to teach the students,” Bray says.

Ahead of each quiz, Bray randomly assigned half of the students to complete the homework using this AI tutor and the other half to use standard ChatGPT. He then compared the students’ experiences with the AI tutor versus ChatGPT across a total of 17,946 homework questions.
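One way to picture that assignment scheme is a rough sketch in which the class is reshuffled and split in half before every quiz. The roster, seeding, and labels below are hypothetical, not the study’s actual procedure.

```python
import random

def assign_conditions(students: list[str], quiz_id: int) -> dict[str, str]:
    """Randomly split the class in half for one quiz: AI tutor vs. standard ChatGPT."""
    rng = random.Random(quiz_id)   # illustrative: a fresh, reproducible shuffle per quiz
    shuffled = students[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {name: ("ai_tutor" if i < half else "chatgpt")
            for i, name in enumerate(shuffled)}

# Hypothetical roster and quiz
print(assign_conditions(["Ana", "Ben", "Chen", "Dee"], quiz_id=7))
```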

Overall, the students not only preferred using the AI tutor over ChatGPT but also found the AI tutor more helpful.

Nearly twice as many students reported having a “very positive experience” using the AI tutor for their homework (47 percent), compared with using ChatGPT (26 percent). And 40 percent said the AI tutor was “very helpful,” compared with 30 percent for ChatGPT. Both of these differences were statistically significant. The professors also found that students’ preference for the AI tutor grew stronger as the assignments got more complex.

“Students tend to like it if you make the homework assignment look more like a video game, where you’re chatting with an AI agent that’s intuitive,” Bray says.

Even though students favored the AI tutor, using it did not ultimately change the amount of time they spent doing the homework or the number of homework questions they attempted to solve. There was also no meaningful difference in average quiz scores when students did their homework with the AI tutor versus ChatGPT.

But that’s not necessarily a knock against AI. “It’s well-established in the literature that it’s very, very difficult for almost any technological innovation in the classroom to meaningfully influence performance on exams,” Bray explains. “Actual quiz performance comes down more to student conscientiousness—how much focus individual students are putting in and how motivated they are to study.”

Indeed, the fact that the AI tutor helped improve students’ experience with homework is a worthwhile achievement in its own right. “Homework is something students have to do,” Bray says, “so if we can make it a better experience, that’s a win.”

What’s more, the study’s findings can be applied to all kinds of teams and businesses beyond the classroom.

“All companies do some form of training, and people learn on the job more than they learn in school,” Bray says. “So if you just put a little bit of effort into instructing an AI [model] how it should behave—and train it to be a kind of tutor—it can actually give workers a more-pleasurable, better experience.”