Gaze Tracking at 100 Hz

Novel approach to real-time gaze estimation using temporal fusion and geometric constraints. Achieves professional-grade performance without specialized hardware or GPU acceleration.

Real-time gaze tracking demonstration

Research Contributions

Bridging the gap between laboratory precision and deployable systems through algorithmic innovation

Continuous Gaze Vector Estimation

Full 3D gaze direction recovery with off-camera angle estimation. Enables precise attention modeling and eye-contact quantification beyond binary classification schemes.

Vector Output · Off-Axis Detection
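A continuous gaze vector subsumes binary on/off-camera classification: the angle between the vector and the camera axis can be thresholded to recover an off-axis flag. The sketch below illustrates this under assumed conventions (camera coordinates with -z pointing toward the camera, a hypothetical 15° threshold); the system's actual coordinate frame and thresholds are not published.

```python
import math

def gaze_angles(gaze_vector):
    """Convert a 3D gaze vector (camera coordinates, -z toward the camera;
    an assumed convention) into yaw/pitch angles in degrees."""
    x, y, z = gaze_vector
    yaw = math.degrees(math.atan2(x, -z))    # horizontal deviation
    pitch = math.degrees(math.atan2(y, -z))  # vertical deviation
    return yaw, pitch

def is_off_axis(gaze_vector, threshold_deg=15.0):
    """Derive a binary off-camera flag from the continuous vector.
    The threshold is illustrative, not the system's calibrated value."""
    yaw, pitch = gaze_angles(gaze_vector)
    return math.hypot(yaw, pitch) > threshold_deg
```

A vector of `(0, 0, -1)` (looking straight into the camera) yields zero yaw and pitch, while any lateral component pushes the angular deviation up smoothly, which is what enables attention modeling beyond a binary label.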

CPU-Only Architecture

Lightweight computational pipeline achieving 100 Hz sampling on commodity hardware. Eliminates GPU dependency through optimized geometric algorithms and efficient feature extraction.

Zero GPU · Edge Computing
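Sustaining 100 Hz on a CPU comes down to keeping per-frame work inside a 10 ms budget and sleeping off the remainder. A minimal fixed-rate loop, with `process_frame` as a stand-in for the (unpublished) geometric pipeline, might look like this:

```python
import time

def run_tracker(process_frame, rate_hz=100.0, duration_s=0.05):
    """Fixed-rate sampling loop sketch: call process_frame at ~rate_hz,
    sleeping off whatever remains of each 1/rate_hz budget.
    process_frame is a hypothetical callback, not the SDK's API."""
    period = 1.0 / rate_hz
    end = time.monotonic() + duration_s
    ticks = 0
    while time.monotonic() < end:
        start = time.monotonic()
        process_frame()
        ticks += 1
        leftover = period - (time.monotonic() - start)
        if leftover > 0:
            time.sleep(leftover)
    return ticks
```

The loop degrades gracefully: if a frame overruns its 10 ms budget, the sleep is simply skipped and the effective rate drops rather than queueing stale frames.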

Privacy-Preserving Design

Complete on-device processing with no data transmission. Architectural approach ensures GDPR compliance and enables deployment in privacy-sensitive contexts.

Zero Transmission · Local Processing

Temporal Fusion Robustness

Multi-frame temporal integration compensates for noise and illumination variability. Maintains stable tracking under challenging real-world conditions including low light and rapid movement.

Temporal Filtering · Noise Resilient
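One common way to realize this kind of multi-frame integration is an exponential smoother with an outlier gate: small frame-to-frame changes are tracked responsively, while implausible jumps (glare, momentary occlusion) are damped. The filter design below is illustrative; the actual fusion algorithm is not published, and `alpha` and `max_jump` are assumed values.

```python
class TemporalFuser:
    """Minimal exponential-smoothing sketch of multi-frame fusion.
    alpha and max_jump are illustrative, not calibrated constants."""
    def __init__(self, alpha=0.3, max_jump=0.5):
        self.alpha = alpha        # smoothing gain for plausible motion
        self.max_jump = max_jump  # gate separating motion from noise
        self.state = None

    def update(self, sample):
        if self.state is None:
            self.state = sample
        elif abs(sample - self.state) > self.max_jump:
            # implausibly large jump: blend conservatively
            self.state += 0.1 * (sample - self.state)
        else:
            self.state += self.alpha * (sample - self.state)
        return self.state
```

Applied per gaze coordinate, this trades a few milliseconds of lag for stability under low light and rapid movement, the same trade-off the temporal fusion stage makes.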

Standard Hardware Compatibility

Functions with conventional RGB webcams without infrared illumination or depth sensors. Rapid calibration procedure enables immediate deployment across diverse user populations.

RGB Only · Fast Calibration

Comprehensive Output Metrics

Simultaneous extraction of gaze coordinates, confidence estimates, head pose parameters, and ocular events. Provides rich behavioral signals for downstream analysis.

Multi-Modal · Real-Time
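A per-frame output record bundling these signals could be sketched as a simple typed structure. The field names below are assumptions for illustration, not the SDK's actual schema:

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    """Illustrative per-frame record; field names are hypothetical,
    not the SDK's published schema."""
    timestamp_ms: float
    gaze_x: float           # normalized screen coordinate in [0, 1]
    gaze_y: float
    confidence: float       # estimator confidence in [0, 1]
    head_yaw_deg: float     # head pose parameters
    head_pitch_deg: float
    blink: bool             # ocular event flag
```

Keeping coordinates, confidence, pose, and events in one timestamped record is what makes downstream behavioral analysis (fixation detection, attention trajectories) straightforward to build.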

Multi-Modal Integration

Combining visual, acoustic, and interaction signals for comprehensive behavioral assessment

Acoustic Event Classification

Real-time audio stream analysis for ambient activity detection. Enables contextual awareness in remote assessment and collaborative environments through acoustic pattern recognition.

Audio Processing · Event Detection
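At its simplest, ambient activity detection starts from frame-level energy: frames whose RMS exceeds a threshold are candidate events for classification. The sketch below is a stand-in for the (unpublished) acoustic classifier, with an assumed threshold:

```python
import math

def detect_acoustic_events(frames, rms_threshold=0.1):
    """Flag audio frames whose RMS energy exceeds a threshold - an
    illustrative stand-in for the acoustic classifier. Each frame is
    a list of samples in [-1, 1]; the threshold is an assumption."""
    events = []
    for i, frame in enumerate(frames):
        rms = math.sqrt(sum(s * s for s in frame) / len(frame))
        if rms > rms_threshold:
            events.append(i)
    return events
```

A production classifier would go on to label the flagged frames (speech, keyboard, door), but the energy gate keeps the always-on path cheap.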

Behavioral Input Analytics

Temporal analysis of interaction patterns including typing velocity, pause distributions, and input rhythm characteristics. Provides engagement proxies for attention state inference.

Keystroke Dynamics · Cognitive Load
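The features named above (typing velocity, pause distributions, rhythm) all derive from the sequence of key-press timestamps. A minimal extraction sketch, with illustrative feature names and an assumed 500 ms pause cutoff:

```python
from statistics import mean

def keystroke_features(timestamps_ms, pause_cutoff_ms=500.0):
    """Derive simple rhythm features from key-press timestamps.
    Feature names and the pause cutoff are illustrative choices."""
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    if not gaps:
        return {"keys_per_sec": 0.0, "mean_gap_ms": 0.0, "pauses": 0}
    duration_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
    return {
        "keys_per_sec": len(timestamps_ms) / duration_s,
        "mean_gap_ms": mean(gaps),
        "pauses": sum(1 for g in gaps if g > pause_cutoff_ms),
    }
```

Note that only timing metadata is used: the analysis never needs key identities, which is consistent with the privacy-preserving design.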

Speech-to-Text Pipeline

On-device speech recognition for accessibility applications and real-time transcription. Supports note-taking workflows and verbal interaction documentation in educational contexts.

ASR · Local Processing

Attention State Modeling

Probabilistic fusion of gaze vectors, acoustic events, and input patterns to estimate engagement levels. Provides holistic behavioral signatures for activity classification.

Sensor Fusion · Inference
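A simple form of such fusion is weighted late fusion: each modality contributes a per-window engagement probability, and a convex combination yields the overall estimate. The weights below are illustrative; the actual probabilistic model is not published.

```python
def fuse_engagement(gaze_score, audio_score, input_score,
                    weights=(0.5, 0.2, 0.3)):
    """Weighted late-fusion sketch: combine per-modality engagement
    probabilities in [0, 1] into one estimate. The weights are
    illustrative assumptions, not the system's learned parameters."""
    w_gaze, w_audio, w_input = weights
    total = w_gaze + w_audio + w_input
    return (w_gaze * gaze_score + w_audio * audio_score
            + w_input * input_score) / total
```

Because the output stays in [0, 1], it can be thresholded for activity classification or logged as a continuous engagement signal.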

Privacy-Aware Architecture

All audio and keystroke processing occurs locally with configurable retention policies. Raw sensor data never leaves the device, ensuring compliance with data protection regulations.

Data Sovereignty · GDPR Compliant

Experimental Results

Performance characteristics from controlled evaluation protocols

~100 Hz
Sampling Frequency

Sustained high-frequency tracking with 10 ms temporal resolution

0%
Server Dependency

Complete independence from cloud infrastructure and network connectivity

<30 min
Integration Time

Rapid deployment from SDK initialization to production streaming

Application Domains

Validated deployments across telecommunication, education, and automotive sectors

Educational Technology

Multi-modal behavioral monitoring for remote assessment environments. Integrates visual attention, acoustic context, and interaction patterns for comprehensive engagement analysis.

  • Multi-screen detection
  • Attention trajectory analysis
  • Acoustic event classification
  • Typing behavior analytics
  • Privacy-compliant architecture
  • Local data processing

Automotive Systems

Driver monitoring without infrared illumination requirements. Enables eyes-off-road detection through efficient CPU-based processing.

  • Driver attention state estimation
  • Distraction event detection
  • Standard camera compatibility
  • Real-time classification

Collaboration Inquiries

Exploring research partnerships and technology transfer opportunities