That’s why Apple has restricted the use of OpenAI’s ChatGPT and Microsoft’s Copilot, The Wall Street Journal reports. ChatGPT has been on Apple’s restricted list for months, according to Bloomberg’s Mark Gurman.
It’s not just Apple. Samsung and Verizon in the tech world, and a who’s who of banks (Bank of America, Citi, Deutsche Bank, Goldman Sachs, Wells Fargo, and JPMorgan) have also cracked down, apparently out of concern that confidential data could escape. In any event, ChatGPT’s privacy policy clearly states that your prompts may be used to train its models unless you opt out. The fear of leaks isn’t unfounded: in March, a bug in ChatGPT exposed data from other users’ conversations.
I’m inclined to think of these bans as very loud warning shots.
One of the obvious uses of this technology is in customer service, where companies try to reduce costs. But for customer service to work, customers have to provide their details – sometimes personal, sometimes sensitive. How do companies plan to secure their customer service bots?
This isn’t just a problem for customer service, either. Let’s say Disney decides to let AI write its Marvel movies, rather than just handle the VFX. Is there a world where Disney would allow Marvel spoilers to leak?
One thing that’s generally true of the tech industry is that early-stage companies – like a younger iteration of Facebook, for example – don’t pay much attention to data security. In that case, it makes sense to limit exposure to sensitive material, as OpenAI itself suggests. (“Please don’t share any sensitive information in your conversations.”) This isn’t an AI-specific problem.
But I’m curious whether there are intrinsic problems with AI chatbots. One of the expenses that comes with doing AI is compute. Building your own data center is expensive, but using cloud compute means your queries are being processed on a remote server, where you’re essentially relying on someone else to keep your data secure. You can see why banks might be apprehensive here – financial data is incredibly sensitive.
On top of accidental public leaks, there is also the potential for intentional corporate espionage. At first blush, this looks more like a tech industry issue – after all, trade secret theft is one of the risks here. But the big tech companies have moved into streaming, so I wonder whether this isn’t a problem for the creative end of things as well.
There’s always a push-and-pull between privacy and utility when it comes to tech products. In many cases – Google and Facebook, for example – users have exchanged their privacy for free products. Google’s Bard is clear that queries will be used “to improve and develop Google products, services and machine-learning technologies.”
It’s possible that these big, knowledgeable, privacy-focused companies are just being paranoid and have nothing to worry about. But suppose they are right. If so, then I can think of a few possibilities for the future of AI chatbots. The first is that the AI wave turns out exactly like the metaverse: a non-starter. The second is that AI companies are pressured to overhaul and clearly outline their security practices. The third is that every company that wants to use AI will have to build its own proprietary model or at least run its own processing, which sounds very expensive and difficult. And the fourth is an online privacy nightmare, where your airline (or debt collector, or pharmacy, or whoever) regularly leaks your data.
I don’t know how this shakes out. But if the most security-obsessed companies are winding down their AI use, there may be good reason for the rest of us to do the same.