
Ocular Rig v0.7 (1/2024–7/2025)
This project began with a spark of inspiration from Iron Man: a dream of a futuristic heads-up display. But the path from that idea to a functional prototype was my most challenging yet. It took over a year of relentless iteration, with more than seven failed 3D prints and designs.
Each failure, however, was a lesson. I had to get creative, constantly rethinking the design and solving problems I never anticipated. What started as a sci-fi-inspired prototype evolved into something with genuine purpose. I integrated sensors and visual alerts, pivoting the glasses' function toward helping visually- and hearing-impaired users navigate their environment more safely.
Concept

These smart glasses feature an integrated heads-up display controlled by capacitive touch sensors on the frame, allowing users to cycle through a suite of tools. The system runs machine learning on-device, using Edge Impulse models for real-time object detection and sound classification, and surfaces environmental alerts and data directly within the user's field of view.
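Here's a minimal sketch of the mode-cycling idea on the ESP32-S3's built-in capacitive touch peripheral. The pin number, threshold, and mode count are placeholders, not the exact values from my build; on the S3, touchRead() values rise when the pad is touched, so the threshold has to be calibrated against the assembled frame.

```cpp
#include <Arduino.h>

const uint8_t TOUCH_PIN = 3;             // GPIO3: a touch-capable S3 pin (placeholder)
const uint32_t TOUCH_THRESHOLD = 40000;  // calibrate: S3 readings rise on touch
const int NUM_MODES = 4;                 // e.g. clock, alerts, detection, compass

int currentMode = 0;
bool wasTouched = false;

void setup() {
  Serial.begin(115200);
}

void loop() {
  bool touched = touchRead(TOUCH_PIN) > TOUCH_THRESHOLD;

  // Advance one mode per tap (rising edge), not once per loop iteration.
  if (touched && !wasTouched) {
    currentMode = (currentMode + 1) % NUM_MODES;
    Serial.printf("Switched to mode %d\n", currentMode);
    // drawMode(currentMode);  // in the real build this redraws the HUD
  }
  wasTouched = touched;
  delay(20);  // crude debounce
}
```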

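And here is roughly how an exported Edge Impulse Arduino library gets invoked for the sound-classification side. The header name and alert threshold are placeholders (Edge Impulse generates a project-specific header), and filling the audio buffer from the microphone is omitted; only the run_classifier() flow is shown.

```cpp
#include <ocular_rig_inferencing.h>  // placeholder: your project's generated header

static float audio_buf[EI_CLASSIFIER_RAW_SAMPLE_COUNT];

// Callback the Edge Impulse SDK uses to pull raw samples from our buffer.
static int get_audio_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, audio_buf + offset, length * sizeof(float));
  return 0;
}

void classifyOnce() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
  signal.get_data = &get_audio_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

  // Surface any label that crosses an alert threshold, e.g. "car_horn".
  for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
    if (result.classification[i].value > 0.8f) {
      Serial.printf("ALERT: %s (%.2f)\n",
                    result.classification[i].label,
                    result.classification[i].value);
    }
  }
}
```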
3D-Printed Components
Printer: Anycubic Mega X
Slicer: Ultimaker Cura
Material: PLA (Silver, Black, Semi-transparent)
1. Main Rig (Frame)
- Integrated adjustment rails allow micro-adjustment of key components.
- Ergonomic design contours to the natural shape of the human head, distributing weight evenly and minimizing pressure points.
2. Projector Module
- Housing for the projection unit: OLED, prism, and convex lens.
3. Lens Holder
- Secures the beam-splitting prism.
4. Nose Rest
5. Battery Box
- Enclosure for the lithium battery; includes a click switch.
6. Camera Holder
- Secures the camera module.
7. Microcontroller Container
- Case for the Xiao ESP32-S3.

Physics Principles of the Lens
The display system employs folded optics beginning with a micro-OLED source. The initially inverted image undergoes geometric correction via a right-angle prism using total internal reflection.
A convex lens then acts as a magnifier, creating a virtual image at a comfortable diopter offset for near-eye viewing. This optical design transforms the small source into a magnified, focusable image despite the short physical path.
Final image superposition is achieved through the beam-splitting prism, which reflects the magnified virtual image into the user's field of view while maintaining high transmittance of ambient light. This creates an augmented-reality overlay on the user's view of the world without requiring the eye to accommodate to the physical OLED panel itself.
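As a rough worked example (the focal length and spacing here are illustrative placeholders, not measurements from the rig), the thin-lens equation shows how placing the OLED just inside the lens's focal length produces a distant virtual image:

$$\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}$$

With $f = 25\,\mathrm{mm}$ and the OLED at $d_o = 24\,\mathrm{mm}$:

$$\frac{1}{d_i} = \frac{1}{25} - \frac{1}{24} = -\frac{1}{600}\,\mathrm{mm}^{-1} \;\Rightarrow\; d_i = -600\,\mathrm{mm}$$

The negative sign marks a virtual image roughly 0.6 m in front of the eye (about a 1.7-diopter accommodation demand), with lateral magnification $|d_i|/d_o = 25\times$, which is how a panel millimeters from the face can read like a screen at arm's length or beyond.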
Archives (Earlier Prototypes)
March 2024
My first iteration relied on Bluetooth communication with a smartphone app built in MIT App Inventor. The concept was simple: let the phone do the heavy processing while the glasses display the information. The app handled the complex work (navigation, notifications, and data fetching) and sent simple serial commands over Bluetooth to an Arduino Uno with an HC-05 module. It was a proof of concept, but relying on a phone was clunky.
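For illustration, the firmware side of that link might have looked something like this. The pins, baud rate, and single-character command set are reconstructions of the idea, not the exact original protocol:

```cpp
#include <SoftwareSerial.h>

SoftwareSerial bt(10, 11);  // RX, TX wired to the HC-05's TX/RX

void setup() {
  Serial.begin(9600);
  bt.begin(9600);  // HC-05 default baud rate
}

void loop() {
  // The phone app sends one character per action; the Uno just reacts.
  if (bt.available()) {
    char cmd = bt.read();
    switch (cmd) {
      case 'N': Serial.println("show navigation");   break;
      case 'M': Serial.println("show notification"); break;
      case 'T': Serial.println("show time");         break;
      // in the real build these drew to the OLED instead of Serial
    }
  }
}
```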


I experimented with cheap materials: a piece of plastic film in place of a proper beam splitter, and a larger, simpler OLED. The image was distorted and terrible, but it proved the basic optical idea could work. There were many failed 3D prints and dead ends along the way.
August 2024 onwards
This iterative process led to the current standalone version. I moved away from Bluetooth, integrated the powerful Xiao ESP32-S3, and finally used real optical components to create a clean, self-contained device. Every mistake was a step forward.