Samsung doesn’t want its staff entrusting the company’s sensitive information to generative AI tools like ChatGPT. As such, the South Korean company is now banning the use of generative AI services within its workforce.
“We ask that you diligently adhere to our security guideline and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” Samsung said in a memo reviewed by Bloomberg News.
According to the report, Samsung conducted an internal survey in April in which 65% of respondents said that using AI tools in the workplace posed a security risk. Notably, the company also discovered last month that some of its engineers had “accidentally” leaked internal source code by uploading it to ChatGPT.
The report notes that Samsung’s memo stresses security concerns: the company fears that confidential information submitted to AI tools ends up stored on external servers beyond its control, where it could be exposed in a public leak.
The memo warned staff in a specific division against using generative AI tools and urged them to follow company policy, underscoring that those who refuse could face disciplinary action. Despite the warning, Samsung noted that the restriction would be temporary, sharing its intention to “create a secure environment” for using tools like ChatGPT.
“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” the memo reads. “However, until these measures are prepared, we are temporarily restricting the use of generative AI.”
Samsung joins other companies, such as JPMorgan Chase & Co., Bank of America Corp., and Citigroup Inc., that have already banned the use of generative AI in their workplaces over the same concerns. The move is hardly a surprise, especially after a recent ChatGPT bug temporarily exposed users’ chat histories and, possibly, payment information. And given that OpenAI can access the data fed into ChatGPT, companies have every right to be wary of how their employees use the tool.
To be fair, OpenAI has consistently warned users not to share “any sensitive information,” and its policy states that it “may use Content you provide us to improve our Services, for example to train the models that power ChatGPT.” ChatGPT does now offer an incognito mode of sorts that disables chat history, but that still might not be sufficient assurance for companies, since even a minor security lapse could spell a major business disaster.