Apple is working on smart glasses without a display. The device may focus on AI-powered interaction rather than augmented reality. This marks a shift from the approach seen in Apple Vision Pro. The tech giant aims to bring simplicity and everyday usability to its smart glasses.
Apple will reportedly rely on audio feedback and voice responses as the primary means of communication. The resulting lightweight design should make extended wear more comfortable and help extend battery life, a critical constraint for wearable devices.
The device may include a dual-camera system: one camera for taking pictures and recording video, and a second that tracks the user's environment and hand movements. This configuration gives the system the contextual understanding it needs.
The system is expected to support gesture-based input with real-time processing. Apple is said to be developing gesture controls that let users navigate the interface and perform actions with simple hand movements. The cameras detect these gestures and interpret them visually, eliminating the need for touch panels and buttons and enabling fully touchless interaction.
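To make the touchless flow concrete, here is a minimal, purely illustrative sketch of how camera-detected gestures could be mapped to interface actions. This is not Apple's implementation; the gesture names, actions, and confidence threshold are all assumptions for illustration.

```python
# Illustrative sketch only -- NOT Apple's actual software. It shows, in
# simplified form, how a recognizer's output (a gesture label plus a
# confidence score) could drive touchless UI actions with no buttons.

# Hypothetical mapping from recognized gestures to interface actions.
GESTURE_ACTIONS = {
    "pinch": "select",
    "swipe_left": "previous_item",
    "swipe_right": "next_item",
    "open_palm": "dismiss",
}

def interpret(gesture: str, confidence: float, threshold: float = 0.8) -> str:
    """Map a recognized hand gesture to a UI action.

    Detections below the confidence threshold are ignored -- a practical
    necessity if, as critics note, recognition relies on a single
    low-resolution camera and is therefore noisy.
    """
    if confidence < threshold:
        return "ignore"
    return GESTURE_ACTIONS.get(gesture, "ignore")

print(interpret("pinch", 0.95))       # confident detection -> "select"
print(interpret("swipe_left", 0.50))  # too noisy -> "ignore"
```

The key design point the sketch captures is that a camera-only input path must filter uncertain detections rather than act on every frame.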
Siri would serve as the primary interface, delivering responses tailored to user queries. The glasses will process visual input to provide immediate answers, making the device function as an AI assistant rather than a visual display.
Apple is likely to avoid complex hardware components such as LiDAR and sophisticated AR technologies, apparently prioritizing a lightweight design and extended battery performance instead. However, critics have raised concerns that gesture recognition may not work reliably with a single low-resolution camera and without additional hardware such as neural bands or eye-tracking systems.