Apple has given us a vision of what its VR and AR future might entail. But as others have pointed out numerous times, the whole point of the showcase at WWDC 23 was to let people experiment. I’ve heard it compared to the launch of the Apple Watch, when Apple didn’t really know what would become of the device. This is similar and yet different.
Just as with the Apple Watch (and the iPad before it), Apple sought to port its whole ecosystem onto the device. Granted, the Apple Watch can’t live on its own, so a better comparison is probably the iPad. The iPad can live without any other Apple device and, unlike the iPhone, never really had a clearly defined function other than watching movies and browsing the web. It was not until it gained the ability to be used with a pencil that artists and designers started to explore its potential.
My point is that it took Apple five years, from the first iPad in 2010 to the iPad Pro with the Pencil in 2015, to find the device’s “killer app”. But there’s another aspect here that’s equally important: capability. It’s thanks to Apple’s chip design that the iPad Pro was even possible – or rather, that it had sufficient processing power at a low enough battery drain to make this work the way Apple sets its experience standards.
Enter the Vision Pro – it’s already called Pro. I think the strategy may in fact be the opposite this time. Apple decided to go in at the high end to gain as much experience as possible and to set up the richest ecosystem it can, figuring out where its greatest opportunities lie before deciding to focus on a consumer tier. And this could make sense, because I suspect all of this will be enabled by Apple’s chip strategy. Today’s Vision Pro has an M2 chip built in – a chip in the same league as the latest desktop offerings from AMD and Intel, but at a fraction of the power usage. Apple clearly signals that this will always be a standalone device, just like the iPad, and the apps that will need to be rewritten to make use of “Spatial” computing will need the power to run just like on the desktop.

To summarise: Apple is trying to speed up the emergence of the killer app by throwing everything and the kitchen sink into the Vision Pro, re-creating the same ecosystem it already has for its computers, iPhone and iPad, while at the same time making the M-series chips even more power-efficient – so that in a couple of years, a headset can emerge that looks more like the glasses we already wear to correct our vision, but with the power of an M2 and the battery life of an iPad (shrunk even further).
It’s also foreseeable that for a consumer tier, some of the headset’s processing could in fact be offloaded to a Mac (or an iPad, or whatever powerful Apple computing device exists in a couple of years), tying the customer even closer to the Apple ecosystem – all under the premise that the consumer device is less expensive, but you also need another Apple device.
Apple’s future indeed looks impenetrable – people will flock to these headsets for the same reasons they buy iPhones and iPads. But there’s also the potential that it all ends up an utter failure. Apple may not care: even if it pumps $50 billion into this development over the years, that’s almost pocket change relative to Apple’s balance sheet, and in the end they will find uses for their R&D.