When the Ray-Ban Meta Smart Glasses launched last fall, they were a pretty neat content capture tool and a surprisingly solid pair of headphones. But they were missing a key feature: multimodal AI, the ability for an AI assistant to process multiple types of information, like photos, audio, and text. A few weeks after launch, Meta rolled out an early access program, but for everyone else, the wait is over: multimodal AI is coming to everyone.
The timing is uncanny. The Humane AI Pin just launched and bellyflopped with reviewers after a universally poor user experience, and it’s been something of a bad omen hanging over AI gadgets. But having futzed around with the early access AI beta on the Ray-Ban Meta Smart Glasses for the past few months, I think it’s a bit premature to write this class of gadget off completely.
First off, there are some expectations that need managing here. The Meta glasses don’t promise everything under the sun. The primary command is to say “Hey Meta, look and...” You can fill out the rest with phrases like “tell me what this plant is.” You can ask it to read a sign in a different language, write Instagram captions, or identify and tell you more about a monument or landmark. The glasses take a picture, the AI communes with the cloud, and an answer arrives in your ears. The possibilities are not limitless, and half the fun is figuring out where the limits are.
For example, my spouse is a car nerd with their own pair of these things. They also have early access to the AI. My life has become a never-ending game of “Can Meta’s AI correctly identify this random car on the street?” Like most AI, Meta’s is sometimes spot-on and often confidently wrong. One fine spring day, my spouse was taking glamour shots of our cars: an Alfa Romeo Giulia Quadrifoglio and an Alfa Romeo Tonale. (Don’t ask me why they love Italian cars so much. I’m a Camry gal.) It correctly identified the Giulia. The Tonale was also a Giulia. Which is funny because, visually, these look nothing alike. The Giulia is a sedan, and the Tonale is a crossover SUV. It’s really good at identifying Lexus models and Corvettes, though.
I tried having the AI identify my plants, all of which are various forms of succulents: Haworthia, snake plants, jade plants, etc. Since some were gifts, I don’t exactly know what they are. At first, the AI asked me to describe my plants because I got the command wrong. D’oh. Speaking to AI in a way that you’ll be understood can feel like learning a new language. Then it told me I had various succulents of the Echeveria, aloe vera, and Crassula varieties. I cross-checked that with my Planta app — which can also identify plants from photos using AI. I do have some Crassula succulents. As far as I understand, there is not a single Echeveria.