Apple has banned employees from using AI tools like OpenAI’s ChatGPT over concerns that confidential information entered into these systems could be leaked or collected.
According to a report from The Wall Street Journal, Apple employees have also been warned against using GitHub’s AI programming assistant, Copilot. Bloomberg reporter Mark Gurman tweeted that ChatGPT had been on Apple’s list of restricted software “for months.”
Apple has good reason to be wary. By default, OpenAI stores all interactions between users and ChatGPT. These conversations are collected to train OpenAI’s systems and can be inspected by moderators for violations of the company’s terms of service.
In April, OpenAI rolled out a feature that lets users turn off chat history (coincidentally, not long after various EU nations began probing the tool for potential privacy breaches). Even with this setting enabled, however, OpenAI still retains conversations for 30 days, with the option to review them “for abuse,” before permanently deleting them.
Given ChatGPT’s usefulness for tasks such as improving code and brainstorming ideas, Apple may be concerned that employees will enter confidential project information into the system, where it could then be viewed by OpenAI’s moderators. Research has also shown that it is possible to extract training data from some language models through their chat interfaces, though there is no evidence that ChatGPT itself is vulnerable to such attacks.
Apple’s ban comes at a notable moment: OpenAI launched an iOS app for ChatGPT just this week. The app is free to use, supports voice input, and is currently available only in the US. OpenAI says it will soon bring the app to other countries, along with an Android version.









