Apple continued its slow-and-steady approach to integrating artificial intelligence into devices like the iPhone, Mac, and Apple Watch on Monday, announcing a raft of new features and upgrades at WWDC. The company also introduced the Foundation Models framework, which lets third-party developers write code that taps directly into Apple’s on-device AI models.
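Based on the API Apple previewed at WWDC, calling the on-device model is meant to take only a few lines of Swift. The sketch below is illustrative, not production code: the `LanguageModelSession` type and `respond(to:)` method reflect Apple's announced design, but exact signatures may differ in shipping SDKs, and the `summarize` helper is a hypothetical wrapper.

```swift
import FoundationModels

// Hypothetical sketch of the Foundation Models framework as previewed
// at WWDC: a session wraps Apple's on-device language model.
let session = LanguageModelSession(
    instructions: "You are a concise assistant."
)

// Assumed helper (not part of Apple's API): sends a prompt to the
// on-device model and returns its text response.
func summarize(_ text: String) async throws -> String {
    let response = try await session.respond(to: "Summarize briefly: \(text)")
    return response.content
}
```

Because the model runs on the device itself, calls like this would work offline and without per-request fees, which is part of Apple's pitch to developers.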
Among the buzzier AI announcements at the event was Live Translation, a feature that translates phone and FaceTime calls from one language to another in real time. Apple also showed off Workout Buddy, an AI-powered voice helper designed to provide words of encouragement and useful updates during exercise. “This is your second run this week,” Workout Buddy told a jogging woman in a demo video. “You’re crushing it.”
Apple also announced an upgrade to Visual Intelligence, a tool that uses AI to interpret the world through a device’s camera. The new version can also look at screenshots to do things like identify a product or summarize a webpage. Apple showcased upgrades to Genmoji and Image Playground, two tools that generate stylized images with AI. And it showed off ways of using AI to automate tasks, generate text, summarize emails, edit photos, and find video clips.
The incremental announcements did little to dispel the notion that Apple is playing catch-up on AI. The company does not yet have a model capable of competing with the best offerings of OpenAI, Meta, or Google, and it still hands some challenging queries off to ChatGPT.
Some analysts suggest that Apple’s more incremental approach to AI development is warranted.
“The jury is still out on whether users are gravitating towards a particular phone for AI-driven features,” says Paolo Pescatore, an analyst at PP Foresight. “Apple needs to strike the fine balance of bringing something fresh and not frustrating its loyal core base of users,” Pescatore adds. “It comes down to the bottom line, and whether AI is driving any revenue uplift.”