Samsung workers made a major error by using ChatGPT

10 comments
  1. “Samsung meeting notes and new source code are now in the wild after being leaked in ChatGPT

    Samsung workers have unwittingly leaked top secret data whilst using ChatGPT to help them with tasks. 

    The company allowed engineers at its semiconductor arm to use the AI writer to help fix problems with their source code. But in doing so, the workers entered confidential data, such as the source code for a new program, internal meeting notes, and data relating to their hardware. 

    The upshot is that in just under a month, there were three recorded incidents of employees leaking sensitive information via ChatGPT. Since ChatGPT retains user input data to further train itself, these trade secrets from Samsung are now effectively in the hands of OpenAI, the company behind the AI service.

    Out in the OpenAI

    In response, Samsung Semiconductor is now developing its own in-house AI for internal use by employees, though prompts will be limited to 1,024 bytes in size. 
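    How such a byte cap would be enforced has not been made public; a minimal client-side check might look like the following sketch (the function name and exact behavior are assumptions). Note that the limit is in bytes, not characters, so multibyte text such as Korean exhausts the budget faster than ASCII:

```python
def check_prompt_size(prompt: str, max_bytes: int = 1024) -> str:
    """Reject prompts whose UTF-8 encoding exceeds the byte cap.

    1,024 bytes is not 1,024 characters: a Hangul syllable takes
    3 bytes in UTF-8, so a Korean prompt hits the cap much sooner
    than an ASCII one of the same length.
    """
    size = len(prompt.encode("utf-8"))
    if size > max_bytes:
        raise ValueError(f"prompt is {size} bytes; the limit is {max_bytes}")
    return prompt
```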

    In one of the aforementioned cases, an employee asked ChatGPT to optimize test sequences for identifying faults in chips, a process that is confidential. Making this process as efficient as possible has the potential to save chip firms considerable time in testing and verifying processors, leading to cost reductions too. 

    In another case, an employee used ChatGPT to convert meeting notes into a presentation, the contents of which Samsung obviously would not have wanted external third parties to see.

    Samsung Electronics sent out a warning to its workers on the potential dangers of leaking confidential information in the wake of the incidents, saying that such data is impossible to retrieve as it is now stored on the servers belonging to OpenAI. In the semiconductor industry, where competition is fierce, any sort of data leak could spell disaster for the company in question.

    It doesn’t seem as if Samsung has any recourse to request the retrieval or deletion of the sensitive data OpenAI now holds. Some have argued that this very fact makes ChatGPT non-compliant with the EU’s GDPR, as this is one of the core tenets of the law governing how companies collect and use data. It is also one of the reasons why Italy has now banned the use of ChatGPT nationwide.”

  2. This means that if you share code with ChatGPT, you are effectively putting it in the public domain, because the AI is going to learn from it and use it to solve your competitor’s problems. This is quite frightening for any developer who wants to leverage a public AI to speed up project delivery.

  3. Guardrails need to be put in place. All it takes is one multimillion-dollar mistake to hit you.
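    One such guardrail is a pre-submission filter that blocks prompts containing secret-looking material before they ever reach a public chatbot. The patterns below are illustrative assumptions only; a real deployment would rely on a dedicated secret scanner:

```python
import re

# Hypothetical patterns for illustration; real deployments would use
# a maintained secret-scanning ruleset rather than a hand-rolled list.
SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]"),
    re.compile(r"(?i)\bconfidential\b"),
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt matches any sensitive-data pattern."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)
```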

  4. The workers messed up, but I’m definitely not a fan of every company stealing our data and refusing to delete it when asked.

  5. Just a few days ago, Italy said that ChatGPT was going to be banned over privacy problems, and everyone in here was bashing it. So maybe there was a point.

  6. But the question is – How did anyone find out about this?

    The article doesn’t explain the source of this information.

  7. And that’s how ChatGPT could create a service for companies by offering a version of its product that keeps the information confidential from other users.

  8. Fake news! No employer has ever allowed its workers to use a public chatbot to solve an issue with sensitive source code.

    Come on!

  9. I was thinking about this. Eventually the optimal solution is for every organization to have its own LLM deployed. Centralization of AI, like we have right now with everyone using ChatGPT, is clearly a problem for intellectual property.
