GEXT
2025–present · Founder
Working prototype, not yet miniaturized
GEXT is a compact micro-gesture wearable for controlling interfaces without voice and without touching a screen. It's designed to be silent, discreet, and private, with a low learning curve and a clear focus: being able to navigate and control an interface in real situations without pulling out your phone or talking to an assistant.
The challenge
We use more and more devices in contexts where voice doesn't fit and touch isn't practical. Voice works well for simple actions, but it breaks down when privacy, context, or precision matter, or when you need continuous interaction. Daily life is full of moments when you want to control something without making noise: shared spaces, meetings, commuting, walking. Or your hands are simply busy and you can't reach for your phone. And when you also need fine control to navigate, select, move a pointer, or scroll, there's still no portable solution that's comfortable to use frequently.
What it enables
GEXT is designed to cover two complementary needs with a small set of natural micro-gestures. On one hand, quick navigation to move through interfaces without friction. On the other, pointer control when needed: cursor, click, and scroll. The goal isn't gestures that fire one-off commands, but a practical way to control a full interface without relying on voice or a touchscreen.
How it's different
There are already EMG-based approaches (bands that interpret muscle signals) aimed at subtle, private interaction. GEXT pursues the same goal, but with a different strategy: instead of inferring intent through muscle signals, it detects the gesture directly and translates it into standard input. The focus is on making control repeatable, easy to learn, and useful for real cursor and navigation tasks.
How it works and what I've built
I've integrated miniaturized sensors to interpret micro-gestures and convert them into input actions like navigation, cursor, click, and scroll. The device is worn on the hand and connects via Bluetooth Low Energy as a standard peripheral.
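Because the device presents itself as a standard Bluetooth HID peripheral, each recognized gesture ultimately becomes an ordinary input report the host already understands. The sketch below shows the idea with the 4-byte HID boot-protocol mouse report (buttons, dx, dy, wheel); the gesture names and their mappings are illustrative assumptions, not GEXT's actual gesture set.

```python
import struct

# Hypothetical gesture-to-report mapping for illustration only;
# the real GEXT gesture vocabulary is not public.
GESTURES = {
    "tap":        {"buttons": 0x01, "dx": 0, "dy": 0, "wheel": 0},   # left click
    "swipe_up":   {"buttons": 0x00, "dx": 0, "dy": 0, "wheel": 1},   # scroll up
    "swipe_down": {"buttons": 0x00, "dx": 0, "dy": 0, "wheel": -1},  # scroll down
}

def hid_mouse_report(gesture: str) -> bytes:
    """Pack a gesture into a 4-byte HID boot-protocol mouse report:
    [buttons, dx, dy, wheel], with dx/dy/wheel as signed bytes."""
    g = GESTURES[gesture]
    return struct.pack("<Bbbb", g["buttons"], g["dx"], g["dy"], g["wheel"])
```

The payoff of emitting standard reports is that no host-side driver or companion app is required: any phone or computer that pairs with a Bluetooth mouse accepts these bytes as-is.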
I've developed the system end-to-end, with iterative hardware and firmware prototypes, functional integration as a Bluetooth mouse and keyboard, and several interaction modes for navigation and pointer control. I've also created validation tools to test real flows and speed up iteration, with the goal of reducing learning friction and improving consistency and speed.
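One way to support several interaction modes with a small gesture set is to let the same micro-gesture resolve to different actions depending on the active mode. This is a minimal sketch of that routing idea; the mode names, gesture names, and the "hold" mode-switch gesture are all assumptions for illustration.

```python
from enum import Enum, auto

class Mode(Enum):
    NAVIGATION = auto()  # move through UI elements (keyboard-style input)
    POINTER = auto()     # drive the cursor (mouse-style input)

# Hypothetical per-mode mapping: one gesture, different actions per mode.
ACTIONS = {
    Mode.NAVIGATION: {"swipe_right": "key:TAB", "tap": "key:ENTER"},
    Mode.POINTER:    {"swipe_right": "cursor:+10,0", "tap": "mouse:LEFT_CLICK"},
}

class GestureRouter:
    def __init__(self):
        self.mode = Mode.NAVIGATION

    def handle(self, gesture: str) -> str:
        # A dedicated gesture (assumed here to be "hold") toggles modes.
        if gesture == "hold":
            self.mode = (Mode.POINTER if self.mode is Mode.NAVIGATION
                         else Mode.NAVIGATION)
            return f"mode:{self.mode.name}"
        return ACTIONS[self.mode].get(gesture, "noop")
```

Keeping the mapping in a plain table like this also makes it easy to A/B different gesture vocabularies during validation without touching the detection code.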
Demo & IP
In September 2025, I conducted private demos in Silicon Valley, presenting the project to engineers and technical staff at companies like Google, Neuralink, Intel, and Logitech. The project has a patent application filed.
Current status
It currently works as a Bluetooth mouse and keyboard, with several operating modes. The focus is on turning it into a daily-use experience: consistency, ease of learning, and speed in everyday tasks.