Apple Tells Employees Not to Use ChatGPT Due to Data Leak Concerns
The tech giant also advised workers not to use Microsoft's GitHub Copilot, which uses AI to generate code
By Madeline Garfinkle | www.entrepreneur.com
Apple has prohibited employees from using ChatGPT and other artificial intelligence tools over fears of leaking confidential information, The Wall Street Journal reported.
According to an internal document viewed by the outlet, as well as individuals familiar with the matter, Apple has restricted use of the prompt-driven chatbot along with Microsoft's GitHub Copilot, which uses AI to generate software code.
The company fears the AI programs could expose confidential Apple data, per the outlet.
OpenAI, the creator of ChatGPT, stores all chat history from interactions between the chatbot and its users in order to train the system and improve its accuracy over time; those conversations may also be reviewed by OpenAI moderators for possible violations of the company's terms of service.
Related: Walmart Leaked Memo Warns Against Employees Sharing Corporate Information With ChatGPT
While OpenAI introduced an option last month that lets users turn off chat history, the feature still allows OpenAI to monitor conversations for "abuse," retaining them for up to 30 days before deleting them permanently.