Samsung Temporarily Bans ChatGPT and Other Generative AI Tools Due to Security Concerns
According to a report by Bloomberg News, Samsung has banned the use of generative AI tools such as ChatGPT on its internal networks and company-owned devices. The move stems from concerns that uploading confidential information to these platforms could pose a security risk. In a memo to staff, Samsung described the restriction as temporary and said it is working to create a secure environment in which generative AI tools can be used safely.
The primary focus of the ban is likely ChatGPT, the chatbot developed by OpenAI. ChatGPT has gained widespread popularity, not only as a source of entertainment but also as a tool for serious work: people use it to summarize reports or compose emails, for example. That same usage, however, means sensitive information may be entered into the system, potentially giving OpenAI access to it.
The privacy risks of using ChatGPT differ depending on how the service is accessed. If a company uses ChatGPT's API, conversations with the chatbot are not visible to OpenAI's support team and are not used to train OpenAI's models. However, if a user enters text through the general web interface with its default settings, their conversations may be reviewed by OpenAI to improve its systems and to ensure compliance with its policies and safety requirements. OpenAI cautions users against sharing sensitive information in conversations and warns that any conversation may be used to train future versions of ChatGPT. Recently, the company introduced a feature similar to a browser's "incognito mode," which does not save chat histories and prevents them from being used for training.
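The API-versus-web-interface distinction matters because API access is a programmatic request rather than a browser session. As a rough, hypothetical sketch (the helper function name and the default model string are illustrative, not from the article), an API call boils down to building a JSON body like this and POSTing it to OpenAI's chat completions endpoint with an API key:

```python
def build_chat_request(user_text, model="gpt-3.5-turbo"):
    """Build the JSON body for a POST to OpenAI's /v1/chat/completions
    endpoint. Under OpenAI's stated policy, data sent this way is not
    used for model training, unlike default web-interface chats."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }

# Example: the payload an internal tool might send to summarize text.
payload = build_chat_request("Summarize this report: ...")
```

Samsung's concern, by contrast, is employees pasting confidential material into the public web interface, where the default retention and review policies apply.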
Samsung is concerned that employees may inadvertently create a security risk by using ChatGPT without realizing the consequences. In response, the company has temporarily banned generative AI tools until it can establish a secure environment for their use. Beyond restricting generative AI on company-owned devices, Samsung is also advising staff not to upload sensitive business information from their personal machines.
Employees who fail to comply with the security guidelines may face disciplinary action, up to termination of employment. The ban was imposed after some of the company's staff reportedly leaked internal source code by uploading it to ChatGPT, risking public exposure and limiting Samsung's ability to delete it later. Samsung plans to develop in-house AI tools for tasks such as translation, document summarization, and software development. The ban does not extend to devices sold to consumers, such as laptops or phones. Other companies and institutions have also placed limits on generative AI tools, citing concerns including compliance, data protection, child safety, cheating, and misinformation.