As readers of this blog will know, I have a deep fascination with both disruption and AI, particularly regarding how they will impact our daily lives.
The power of AI to tap the knowledge of the internet has already been put in customers' hands through ChatGPT and other large language models (LLMs); its use is now integral to every student's life and is increasingly becoming mainstream in business.
Nevertheless, it is the way we access and utilise this knowledge that will bring about the most significant paradigm shift in our daily lives, with hands-free voice likely to dominate.
A good comparator is how we used the internet when it first arrived. Do you remember those noisy modems we plugged our bulky computers into to access the web? Now, around 70% of what you do online happens on your mobile phone. "Mobile first" is no longer a business buzzword; it has simply become the default.
Interestingly, OpenAI openly states that today's hardware does not support its vision of how customers should access AI, and it has made the sector's first significant move by purchasing a start-up called "io" for $6.4 billion, even though the company was only founded in 2024 by Jony Ive. Yes, I did say billion!!!
Jony Ive is the legendary designer behind iconic products such as the Apple iPhone and iPod, and he founded io alongside several former senior Apple engineers and designers, including Scott Cannon, Tang Tan, and Evans Hankey.
The primary motivation for this acquisition was to gain access to io’s elite development team, many of whom were instrumental in creating the original iPhone and other groundbreaking Apple devices. OpenAI’s CEO, Sam Altman, and Jony Ive had already collaborated for two years prior to the deal, sharing an ambition to create a “new family of AI-powered devices” that could redefine how people interact with computers and artificial intelligence.
OpenAI is refusing even to hint at what new piece of equipment it has in mind, understandably so given the significant impact that the launch of China's DeepSeek and its copycat LLM technology had on OpenAI's valuation. It will be copied, but just as Apple turned its first-mover advantage into an edge it has maintained to this day, OpenAI will hope to do the same.
I have long predicted that within two years we will all have "Digital Twins" that know our buying habits and research our purchases by liaising not with humans but with businesses' AI sales agents, presenting us with a shortlist of options to buy. Until today, I had simply assumed that this data would be held securely on our phones, which would remain our primary tools.
But then, today, I unpacked my new Ray-Ban Meta sunglasses. Just WOW.
As with the launch of the first iPhone, new technology needs to look stylish if people are going to wear it. That is why Meta's collaboration with the iconic Ray-Ban brand is significant, even though it would have been far more practical to produce reading glasses that let me read the instruction manual while setting them up!
I had never previously considered how glasses naturally position speakers close to your ears, which lets the Ray-Bans do away with the need for separate headphones when listening to music or taking phone calls. The sound quality is incredible, and unless you're in a library, nobody around you is likely to hear the track you're listening to, let alone the other side of your phone call. Similarly, microphones positioned just above your mouth provide excellent voice capture.
However, listening to music and handling calls are hardly new. The key new applications are:
Photo and video AI analysis.
With one click of a button, I can start recording video or photographing my surroundings, with Meta’s AI analysing the output to describe what I am looking at.
Potential applications.
· Shopping. I can now walk around a shop, look at an item, and buy it online at a cheaper price, delivered to my home via Amazon Prime so that I don't have to carry it around.
· Look and Tell. The glasses are already AI-enabled and can analyse anything you are looking at, giving you verbal feedback. For example, I asked them to summarise what I am writing in this blog, and they instantly told me I was writing about the features of the new Meta glasses and that I like them. Just think how powerful this is in the real world: reading a menu, say, and telling me the best-reviewed dishes that match my usual tastes.
· Facial recognition. Our police and border forces will soon be equipped with these tools; Harvard students have already demonstrated it is possible by using the glasses to pull up the personal details of strangers they passed on campus.
· Meeting Recording. We are already used to AI bots joining all our video calls and transcribing them to save time and increase accuracy. Our glasses will enable us to bring this technology into the real world.
Language Translation.
Turn on the translate function and the glasses suddenly become the fictional "Babel Fish" from The Hitchhiker's Guide to the Galaxy. They pick up what is being said to you in Spanish (only three languages are supported so far) and translate it into English with around a half-second delay, delivered directly into your ears. Admittedly, it would be helpful if, like Google Translate on your phone, they could also translate your reply back the other way, but understanding your surroundings is probably 80% of the job when navigating cities or travelling in general.
Potential applications.
· Travel. These glasses make travelling the world infinitely easier.
· Business. How many times in meetings have you thought, "I wish I knew what this lot were saying to each other"!
Other similar products are set to launch in the next few months, and if the manufacturers manage to keep them looking cool, you can expect the next wave of smart glasses to really take off. They will, however, still need to be linked to our phones, to keep the heavyweight computing away from our faces.
Glasses may not turn out to be the next big wearable tech, with OpenAI investing billions in the next big thing, but given the trend of Star Trek tools coming to life, you would not bet against them.