AI is coming to your Apple devices. Will it be secure?

At its annual developers conference on Monday, Apple announced its long-awaited artificial intelligence system, Apple Intelligence, which will customize user experiences, automate tasks and, CEO Tim Cook promised, usher in a “new standard for privacy in AI”.
While Apple maintains its in-house AI is built with security in mind, its partnership with OpenAI has drawn plenty of criticism. OpenAI’s ChatGPT has long been the subject of privacy concerns. After its launch in November 2022, it collected user data without explicit consent to train its models, and only began allowing users to opt out of such data collection in April 2023.

Apple says ChatGPT will be used only with explicit consent, for isolated tasks such as email composition and other writing tools. But security professionals will be watching closely to see how this, and other concerns, play out.
“Apple is saying a lot of the right things,” said Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance. “But it remains to be seen how it’s implemented.”
A latecomer to the generative AI race, Apple has lagged behind peers like Google, Microsoft and Amazon, whose shares have been boosted by investor confidence in AI ventures. Until now, Apple had held off from integrating generative AI into its flagship consumer products.
