AI is being integrated into just about everything now, for better or worse, from nearly every corner of Microsoft Windows to the cars we drive. The latter is my focus today, specifically Tesla vehicles and the Grok AI.

Grok is a formidable AI, roughly on par with Gemini (to name one comparison) within the vehicle, helping to answer questions, offer thoughtful insight, provide directions, and now set location-based reminders as well. It can hold interesting conversations with the user at times, and it may even help keep a driver alert if they find themselves a little lethargic.

However, it is also well known for hallucinations, repeatedly getting caught promising actions it cannot actually follow through on.

In one example, while I was speaking to it and looking for specific products, it listed some options matching what I wanted. I asked if it could send that list to me somehow, and it said to check my phone. That surprised me, as I didn’t think it could do that (I was just winging it). I asked Grok where I could find these notifications it was supposedly sending, and it told me to check the notifications coming from the Tesla app. When I explained that I had none, either in my Android notifications or within the Tesla app, it suddenly claimed it was unable to send anything to the Tesla app. When I pointed out that it had just told me it could, it insisted it had never said that. Then, when I quoted exactly what it had said, it apologized, admitted it might have overstepped, and confirmed it isn’t able to communicate with my phone.

Another example came from the other day, when I said, “Hey Grok, call the nearest GameStop.” It replied with “Calling the GameStop on (such and such st)”, paused a second, then continued with “the phone number is….” (and stopped). So I asked why it claimed to be calling GameStop but then shifted to just reading off the number. Grok replied that it was unable to make phone calls, which tells me that for a moment it thought it could, and then retracted the idea.

The issue is that things like this happen a lot, which can be confusing for users who don’t fully understand the ins and outs of modern AI. Without being able to form reasonable assumptions about what the AI should be (or likely is) capable of, it becomes difficult for the user to grasp the true power of the AI as a tool.

There is another way to look at this, though: these slips could be hints of features to come in a half-baked design, features that have already been partially programmed into the AI with the idea that they will ship soon. Even so, that would still count as buggy, since the assistant shouldn’t be referencing capabilities at all if they aren’t ready yet; unfinished features should be completely sandboxed until release, or properly separated in a pre-release / public split (a rough sketch of that idea follows below). It reads like rushed work caused by heavy pressure on the design team behind Grok’s integration with Tesla vehicles.
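To illustrate what I mean by sandboxing, here is a minimal, purely hypothetical sketch of how an in-car assistant could gate unreleased capabilities behind feature flags so the model is never even told about tools it can’t use yet. None of these names or structures come from Tesla or xAI; they are my own illustration of the general idea.

```python
# Hypothetical sketch of capability gating for an in-car assistant.
# All names here are illustrative, not Tesla's or xAI's actual design.

from dataclasses import dataclass

@dataclass
class Capability:
    name: str        # e.g. "send_to_phone", "place_call"
    released: bool   # flipped to True only when the feature actually ships

CAPABILITIES = [
    Capability("answer_questions", released=True),
    Capability("navigate", released=True),
    Capability("send_to_phone", released=False),  # still in development
    Capability("place_call", released=False),     # still in development
]

def available_tools(capabilities):
    """Expose only released capabilities to the assistant, so it has
    no basis for promising actions it cannot actually perform."""
    return [c.name for c in capabilities if c.released]

def build_system_prompt(capabilities):
    tools = ", ".join(available_tools(capabilities))
    return f"You are an in-car assistant. You can ONLY do the following: {tools}."

if __name__ == "__main__":
    print(build_system_prompt(CAPABILITIES))
    # The prompt never mentions send_to_phone or place_call, so the
    # assistant has no reason to claim it can send texts or dial calls.
```

The point of a split like this is that half-finished features stay invisible to the public-facing assistant until someone deliberately flips the flag, rather than leaking out as promises the AI can’t keep.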

Either way, the AI still has a long way to go before it reaches the level users typically fantasize about (by that, I mean reflecting on where technology is headed and where people would like it to be). And yes, it does feel like mistakes are being made, likely due to that heavy pressure on design and programming teams. But when it is ready, I am sure it will be quite fascinating.