The case shows that in-car AI has become an interface directly linked to driver attention, beyond convenience features. [Photo: Reve AI]

Tesla vehicles equipped with xAI’s artificial intelligence chatbot Grok are drawing attention for the driving-convenience feature, but the chatbot appears to be raising both driver distraction and safety concerns on real roads.

CNBC reported on April 25 that, after riding along with a Tesla Model Y owner in the New York area, it found that in-car Grok could affect driving safety because of its highly engaging conversations and its functional limits.

In-car Grok is currently in beta. Drivers can use voice commands to operate navigation or ask a variety of questions. Tesla has been rolling out the feature to vehicles sequentially since July 2025. In the industry, major automakers such as Volvo, Rivian, Mercedes-Benz and BMW are also expanding the adoption of in-car AI.

The problem is that in-car AI could create a new form of distraction rather than reducing smartphone use. Philip Koopman, an autonomous-driving safety expert at Carnegie Mellon University, called chatbot conversations unrelated to road conditions while driving “a clear distraction” and said driving must always be the top priority.

Tesla owner Mike Nelson also cited both the convenience and the risk. After several months of using Grok, he said, he now spends drives asking it questions instead of listening to music or podcasts. Nelson said Grok has genuinely changed his driving experience but added that it is still very dangerous.

Critics say the risk could be greater when Grok is used alongside Tesla’s partially automated feature, Full Self-Driving (Supervised). The feature requires drivers to keep their eyes on the road and intervene immediately when needed, but in actual driving there were confirmed cases of drivers becoming absorbed in conversation and failing to watch road conditions closely enough.

Accuracy issues also emerged. Grok gave different answers to the same question, and navigation commands were sometimes executed differently from the intended route, showing a lack of consistency that could cause confusion while driving.

Content safety is also under debate. In-car Grok responds to anyone’s voice once invoked, and it reportedly includes a mode that allows adult conversations, raising concerns about controlling access for minors.

Tesla’s Full Self-Driving feature is already under investigation by U.S. regulators in connection with multiple accidents. Experts say AI could contribute to safety if it were designed to keep drivers alert to road conditions, but the current structure, centered on conversations unrelated to driving, could instead increase risk.

In-car AI is expected to spread rapidly. But this case is seen as showing that chatbot-based vehicle features must meet a range of standards beyond simple convenience, including driving safety, response accuracy and the protection of minors.