Last week, Apple held its annual keynote event, announcing the new iPhone and Apple Watch lineup for 2023/2024.
During this event, Apple made a very important announcement, one that probably went unnoticed by many observers: the new Apple Watch Series 9 can run the Siri assistant directly on the watch, without any internet connection.
Everyone knows Siri, the assistant introduced in 2011 on the iPhone 4S. Although the promise was great, it’s clear that for all these years users have shunned Siri because of its lack of performance and its tendency to misunderstand even simple questions.
Why is this Apple announcement important?
This Siri update is important because it marks a change of direction for Apple. Up until now, Siri has only worked “online”, in the exact same way as Amazon’s Alexa, Google Home, and so on.
When a user asks Siri a question, an audio recording is made, then transmitted directly to Apple’s servers to:
- Transcribe the audio to text (automatic speech recognition, ASR)
- Interpret the user’s query using NLP
- Generate a response for the user
All this processing is carried out on the server side. It requires dedicated data centers on Apple’s side and an internet connection on the user’s, all with the efficiency we’ve come to expect from Siri. Apple’s announcement now charts a very different path: everything can be done directly on the watch, thanks to two elements:
- The new S9 SiP (System in Package) with a 4-core Neural Engine.
- The first Apple-deployed version of an in-house language model. For the moment, this will work in English and Mandarin when the watch is released.
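Apple already exposes this kind of on-device pipeline to third-party developers through its Speech framework. Here is a minimal sketch of the first step, speech-to-text, forced to stay on the device (authorization handling is omitted, and the audio file path is hypothetical):

```swift
import Speech

// Hypothetical local recording of the user's question.
let audioFileURL = URL(fileURLWithPath: "/path/to/question.wav")

let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
let request = SFSpeechURLRecognitionRequest(url: audioFileURL)

// Fail rather than silently fall back to Apple's servers:
// with this flag set, the audio never leaves the device.
request.requiresOnDeviceRecognition = true

recognizer.recognitionTask(with: request) { result, error in
    if let result = result, result.isFinal {
        print(result.bestTranscription.formattedString)  // step 1 of the pipeline: ASR
    }
}
```

With `requiresOnDeviceRecognition` set, the transcription runs locally; the NLP and response-generation steps are what the S9 and its embedded language model now add on the watch itself.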
This announcement confirms the rumor that Apple is actively working on its own language model. Above all, unlike OpenAI or Google, Apple will be relying on in-house hardware optimized for machine learning: its Apple Silicon chips.
And what exactly is Apple Silicon?
Apple Silicon is the new range of processors released by Apple in 2020. They replace Intel processors on all Macs. These new processors are based on the same architecture as Apple’s A-series (“Bionic”) chips, which has already been deployed for several years on the iPhone, iPad, Apple Watch, etc.
Based on the ARM architecture, these processors were designed by Apple to deliver high performance with very low power consumption. Apple’s phones, watches, and laptops are all equipped with them, offering an unrivaled balance of power and battery life.
In 2017, Apple introduced dedicated cores optimized for machine-learning workloads: the “Neural Engine”. Such NPUs (Neural Processing Units) also exist in chips from other manufacturers, such as Huawei, Samsung, Qualcomm, and NVIDIA.
In 2020, Apple released the M1, the smallest processor in the range. The computing power of its Neural Engine is 11 trillion operations per second, with a maximum power consumption of 39 watts!
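For developers, tapping into the Neural Engine is mostly transparent: Core ML decides at runtime which compute unit each part of a model runs on. A minimal sketch (the compiled model file `MyModel.mlmodelc` is a hypothetical example bundled with the app):

```swift
import CoreML

// Let Core ML schedule each layer on the CPU, GPU, or Neural Engine,
// whichever is fastest for that operation.
let config = MLModelConfiguration()
config.computeUnits = .all

// Load a (hypothetical) compiled Core ML model from the app bundle.
let url = Bundle.main.url(forResource: "MyModel", withExtension: "mlmodelc")!
let model = try MLModel(contentsOf: url, configuration: config)

// From here, model.prediction(from:) runs entirely locally,
// with no network round trip.
```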
Several researchers then compared this new architecture with NVIDIA’s professional GPUs, the state of the art in machine-learning hardware. The comparisons with the Apple M1 in these articles are quite surprising, and unambiguous: its machine-learning capabilities are impressive for $600 hardware. By comparison, an NVIDIA card can cost several thousand dollars.
What's it for?
Apple designed this type of component for machine learning very early on. It enables iPhones, iPads, etc. to embed neural-network models that recognize a user’s fingerprint (Touch ID) or face (Face ID), retouch photos taken by the iPhone, and so on.
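Face ID and Touch ID are a good illustration of this fully local processing: the biometric matching runs on the device’s secure hardware, and apps only receive a yes/no answer. A minimal sketch using Apple’s LocalAuthentication framework:

```swift
import LocalAuthentication

let context = LAContext()
var error: NSError?

// Check that biometrics (Face ID / Touch ID) are available on this device.
if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) {
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your data") { success, _ in
        // The biometric data itself never leaves the device's Secure Enclave.
        print(success ? "Authenticated" : "Authentication failed")
    }
}
```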
The advantage for Apple is that it can implement all these functions without relying on an internet connection. This saves the direct costs and maintenance of new data centers dedicated to these tasks. Let’s not forget that OpenAI spends around $700k per day, among other things to maintain the server infrastructure that runs its GPT models. If you’re interested, I explained the reasons for this in my previous article about ChatGPT.
The attentive reader will point out that this also enables Apple to bring these new functionalities to new hardware ranges (as with the latest Apple Watch).
Let’s not forget that Apple’s core business is selling hardware. And this gives people like me an extra reason to replace their iPhone or Mac even more often.
To sum up my thoughts: it’s both diabolical and brilliant 🙂
Apple’s other angle is the privacy argument. As its advertising campaign put it back in 2019: “What happens on your iPhone, stays on your iPhone.”
This is undoubtedly true, apart from today’s Siri, iCloud, Apple Music, and so on.
What does Apple have in store for us tomorrow?
For tomorrow, Apple has already announced a mixed-reality headset: the Apple Vision Pro.
It, too, will rely on the capabilities of these on-board processors, so that the headset can position virtual objects in the user’s field of vision and track the user’s eyes, fingers, and arms, for a better user experience.
Finally, let’s dream of a Siri that works like ChatGPT, operating without an internet connection (as on this new Apple Watch), directly on your Mac, iPhone, or iPad, using Apple’s large language models. Each user would have their own assistant, running on their own hardware, with no external dependency and greater confidentiality for the data exchanged.
Want to understand how our conversational AI platform optimizes your users’ productivity and engagement by effectively complementing your enterprise applications?