About
Senior backend engineer and systems architect with 15+ years of experience delivering production-grade solutions across AI/ML, computer vision, augmented reality, embedded systems, and full-stack applications.
Currently leading AI/ML pipeline development at Tastemade using multi-agent workflows with LLM orchestration. Patent holder for groundbreaking augmented reality technology. Built multiple never-before-seen installations for major brands and venues including Citizen Watch, Formula 1, Foot Locker, the LA Clippers at Intuit Dome, and Ochsner Health.
Featured Projects
Omakase — Multi-Agent AI Recipe Processing Pipeline
Tastemade
Sophisticated agentic AI system using a multi-layer validation architecture with Generate→Judge→Critic loops. Built on the pydantic-graph workflow engine with type-safe models and Gemini-based LLM agents. Processes recipe content at scale with a 93%+ evaluation pass rate.
Technical Architecture
Generate→Judge→Critic loops with multi-iteration refinement (max 5 attempts). Three processing layers: Style Guide → Structuring → Ingredient Metadata. pydantic-graph workflow engine with type-safe Pydantic models orchestrating Gemini-based LLM agents.
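Below is a minimal plain-Python sketch of the Generate→Judge→Critic control flow, assuming a structured judge verdict. The `JudgeVerdict` schema and the `generate`/`judge`/`critic` callables are illustrative stand-ins for the pydantic-graph nodes and Gemini agents in the production pipeline, not the Omakase code itself.

```python
from pydantic import BaseModel

MAX_ATTEMPTS = 5  # matches the pipeline's multi-iteration refinement cap


class JudgeVerdict(BaseModel):
    """Structured verdict returned by the judge agent (illustrative schema)."""
    passed: bool
    critique: str = ""


async def refine(item: str, generate, judge, critic) -> str:
    """Generate -> Judge -> Critic: regenerate with critic feedback until the
    judge passes the draft or the attempt budget is exhausted."""
    draft, feedback = "", ""
    for _ in range(MAX_ATTEMPTS):
        draft = await generate(item, feedback)             # Generate (LLM agent)
        verdict: JudgeVerdict = await judge(item, draft)   # Judge (LLM agent)
        if verdict.passed:
            return draft
        feedback = await critic(item, draft, verdict.critique)  # Critic (LLM agent)
    return draft  # best effort after MAX_ATTEMPTS
```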
Key Features
- Parallel processing with semaphore concurrency (3 items simultaneously; see the sketch after this list)
- Kappo normalization agent for ingredient standardization
- Compound ingredient parsing and metadata extraction
- Batch-then-sequential judge evaluation strategy
- RFC 2119-compliant prompt standardization
- Deduplication and caching for efficiency
- Logfire observability integration throughout pipeline
- Template string generation with typed placeholders
- Idempotency patterns for reliable concurrent processing
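A sketch of the semaphore-bounded concurrency pattern referenced above, assuming asyncio-based workers; `process_one` stands in for a full per-item pipeline run.

```python
import asyncio

CONCURRENCY = 3  # three items in flight at once


async def process_all(items, process_one):
    """Run process_one over every item, never more than CONCURRENCY at a time."""
    sem = asyncio.Semaphore(CONCURRENCY)

    async def bounded(item):
        async with sem:
            return await process_one(item)

    return await asyncio.gather(*(bounded(i) for i in items))
```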
Pre-Pipeline: LLM Training
Before Omakase, built the first-generation AI system by fine-tuning 50+ models on AWS SageMaker for recipe-specific tasks — ingredient extraction, step parsing, nutrition analysis, and more. A massive training effort with automated deployment workflows at production scale.
Related Systems
agent_swarm: Multi-agent coordination framework. anneal: Meal planning pipeline with constraint solving.
- Python
- Django
- pydantic-ai
- pydantic-graph
- Gemini API
- Celery
- PostgreSQL
- Logfire
- AWS SageMaker
Connectopia — Real-Time Generative AI for Live Arena Experiences
LA Clippers · Intuit Dome
A live, fully local generative-AI system deployed inside the LA Clippers' Intuit Dome. Fans collaboratively build a world in real time, then watch it materialize as a cinematic, audio-visual experience synchronized across massive displays, sound design, and architectural lighting. From a fan's perspective, it feels playful and immediate. Under the hood, it's a tightly orchestrated multi-stage AI pipeline operating entirely on-prem.
My Role
Designed and implemented the generative AI pipeline end-to-end: prompt generation, custom model training, image generation, video generation, upscaling, orchestration, and synchronization with sound and lighting systems.
How It Works
Fans interact with four 75" touch portals to build a "district" by selecting structured components — environment, style, architecture, and mood. These become structured inputs to the generative system. To ensure visual consistency, stylistic control, and reliability in a live venue, the image and video models were custom-trained with LoRAs specific to the project's categories and visual language.
The pipeline executes in three sequential stages with controlled fan-out:
- Prompt Generation (LLM) — A locally hosted LLM converts fan selections into three distinct image prompts, aligned with the trained visual styles.
- Image Generation — Three images generated in parallel using custom-trained image LoRAs.
- Multimodal Video Prompting + Video Generation — Each image is fed into a multimodal LLM, which analyzes it and generates three image-conditioned video prompts. Three videos generated in parallel at 1504×640 using project-specific video LoRAs, then stitched and crossfaded into a single ~30-second sequence.
The final video is upscaled in two passes using different methods, then routed through the same control pipeline that drives spatial audio and arena lighting cues, keeping all visual, audio, and lighting elements synchronized.
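A simplified sketch of the three-stage pipeline and its controlled fan-out, assuming asyncio orchestration; `generate_prompts`, `generate_image`, `video_prompt_from_image`, `generate_video`, and `stitch` are hypothetical stand-ins for the local LLM, the ComfyUI image/video graphs, and post-processing.

```python
import asyncio


async def build_district(fan_selections: dict,
                         generate_prompts,         # local LLM: selections -> 3 image prompts
                         generate_image,           # image LoRA pipeline: prompt -> image
                         video_prompt_from_image,  # multimodal LLM: image -> video prompt
                         generate_video,           # video LoRA pipeline: prompt + image -> clip
                         stitch):                  # crossfade clips into one ~30s sequence
    # Stage 1: one LLM call produces three style-aligned image prompts
    image_prompts = await generate_prompts(fan_selections)

    # Stage 2: three images generated in parallel
    images = await asyncio.gather(*(generate_image(p) for p in image_prompts))

    # Stage 3: per-image video prompting, then three videos in parallel
    video_prompts = await asyncio.gather(
        *(video_prompt_from_image(img) for img in images))
    clips = await asyncio.gather(
        *(generate_video(vp, img) for vp, img in zip(video_prompts, images)))

    # Stitch and crossfade into a single sequence, then hand off to upscaling
    return await stitch(clips)
```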
System Characteristics
- Fully local execution — RTX 4090 GPUs for image/video generation, Mac minis for LLM inference
- Custom-trained image and video LoRAs for style and category control
- No cloud dependencies
- Weatherproof, on-site hardware deployment
- Designed for continuous operation in a live arena environment
- Python
- FastAPI
- ComfyUI
- LTX Video
- Custom LoRAs
- gRPC
- Socket.IO
- Unreal Engine
- RTX 4090
Augmented Glass AR Platform
Patent-pending real-time 3D eye tracking AR system that powers never-before-seen commercial experiences. Core technology deployed to 12 international locations for Formula 1 and the world's first AR retail display at Macy's Herald Square for Citizen Watch.
System Architecture
Built a real-time computer vision system that tracks viewer eye position in 3D space at 60fps with sub-17ms latency. Core engine written in Python with OpenCV for face detection, eye tracking, and depth estimation. Outputs perspective-correct AR content that responds to viewer movement in real time.
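As a rough illustration of single-camera 3D viewer tracking, here is a minimal OpenCV sketch using Haar face detection and a pinhole-camera depth estimate from apparent face width. The production engine uses its own tuned algorithms and calibration; the constants below are assumptions.

```python
import cv2

FOCAL_PX = 950.0       # camera focal length in pixels (from calibration; illustrative)
FACE_WIDTH_MM = 150.0  # assumed average face width used for the depth estimate

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def estimate_viewer_position(frame, frame_cx, frame_cy):
    """Return (x, y, z) of the viewer in mm, camera-centred, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None

    # Track the largest face in frame (the closest viewer)
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])

    # Pinhole model: apparent face width shrinks linearly with distance
    z = FOCAL_PX * FACE_WIDTH_MM / w
    cx, cy = x + w / 2.0, y + h / 2.0
    vx = (cx - frame_cx) * z / FOCAL_PX
    vy = (cy - frame_cy) * z / FOCAL_PX
    return vx, vy, z
```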
Technical Components
- Core tracking engine: Python/OpenCV with custom algorithms for 3D position estimation
- AR rendering: Three.js-based configurator with real-time perspective correction
- LED controller: Custom tone-frequency-based protocol for synchronized lighting
- Deployment: Raspberry Pi and embedded systems for 24/7 operation
- Sub-17ms end-to-end latency from camera to display update
Applications
This platform powers multiple high-profile installations where traditional AR wouldn't work. The system creates the illusion of 3D objects floating inside glass displays by tracking where the viewer is looking and adjusting the rendered perspective in real time.
- Python
- OpenCV
- Three.js
- WebGL
- Raspberry Pi
- Real-time CV
Formula 1 Livery Experience
Interactive AR experience deployed at 12 global Formula 1 events. Fans customize F1 car liveries in real time, seeing their designs rendered on a life-size 3D car that responds to their viewing angle. Built on the Augmented Glass platform with custom F1-specific rendering.
Experience Design
Fans approach a large glass display showing a Formula 1 car. Using a touch interface, they select colors, patterns, and sponsor placements. The system tracks their eye position and renders the car from their exact viewing angle, creating a perfect AR illusion without headsets or phones.
Deployment Scale
Installed at 12 international F1 race events across multiple continents. Each installation required weatherproofing, reliable 24/7 operation, and calibration for different lighting conditions and display sizes.
- Augmented Glass Platform
- Three.js
- WebGL Shaders
- Real-time CV
- Embedded Systems
Citizen Watch Interactive Display
Custom-built LED lighting control system for Citizen Watch's display at Macy's Herald Square. Transparent OLED screens with dual mechanically articulating LG 55" doors showcase the watches, while a 26-channel LED array (one channel per watch) is synchronized with the video content through an audio-frequency control protocol.
Tone-Frequency Control Protocol
The video feed playing on the transparent OLED screens carries an embedded audio track with encoded control frequencies. The system listens for specific frequencies and tone lengths to determine lighting commands — on/off, brightness, effects like waves of light across the watches. Different frequency ranges (600-4200Hz) and durations map to different actions.
This means video designers control the entire physical lighting rig just by editing their audio track. No additional software, no programmer needed for lighting changes. They can choreograph complex lighting sequences purely through their video production workflow.
Engineering Challenge
Getting clean frequency detection in a retail environment with ambient noise was the hardest engineering problem. The system uses FFT-based frequency analysis with noise rejection, requiring multiple consecutive detections before acting. Lighting output runs through 26-channel PWM control via dual PCA9685 boards, with CIE 1931 lightness curves for perceptually smooth dimming.
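A condensed sketch of that detection-and-debounce loop, assuming 44.1 kHz audio blocks as NumPy arrays; the noise threshold, tolerance, and frequency-to-command table are illustrative, not the deployed tuning.

```python
import numpy as np

SAMPLE_RATE = 44100
BAND = (600.0, 4200.0)   # control tones live in this range
CONFIRMATIONS = 3        # consecutive detections required before acting


def dominant_tone(block: np.ndarray):
    """Return the strongest in-band frequency of one audio block, if any."""
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / SAMPLE_RATE)
    in_band = (freqs >= BAND[0]) & (freqs <= BAND[1])
    if not in_band.any() or spectrum[in_band].max() < 10.0 * spectrum.mean():
        return None  # nothing clearly louder than the noise floor (heuristic)
    return float(freqs[in_band][spectrum[in_band].argmax()])


def detect_commands(blocks, freq_to_command, tolerance=30.0):
    """Yield a lighting command only after CONFIRMATIONS consecutive blocks agree."""
    last, count = None, 0
    for block in blocks:
        tone = dominant_tone(block)
        match = None
        if tone is not None:
            match = next((cmd for freq, cmd in freq_to_command.items()
                          if abs(tone - freq) <= tolerance), None)
        if match is not None and match == last:
            count += 1
        else:
            count = 1 if match is not None else 0
        last = match
        if match is not None and count == CONFIRMATIONS:
            yield match
```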
- Python
- FFT Analysis
- PWM Control
- PCA9685
- Raspberry Pi
- Embedded Systems
MacroSync
Voice-first fitness platform where the entire app is controlled through conversation. This is not a calorie tracker with voice features bolted on: conversation is the primary interface. Push-to-talk for hands-free operation: log food, plan meals, adjust goals, track workouts, manage your pantry, and change settings without touching a single form or button.
Conversational Interface
Just say what you ate: "Had two eggs, toast with butter, and coffee with cream." The AI parses it, searches a nutrition database, calculates macros, and logs it with full breakdown. No scanning barcodes, no searching through lists, no typing portion sizes.
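The app itself is Swift/iOS; this Python sketch only illustrates the parse-then-lookup flow described above, with `llm_extract_foods` and `nutrition_db` as hypothetical stand-ins for the AI parser and the nutrition API.

```python
from dataclasses import dataclass


@dataclass
class LoggedFood:
    name: str
    quantity: float
    calories: float
    protein_g: float
    carbs_g: float
    fat_g: float


def log_utterance(utterance: str, llm_extract_foods, nutrition_db):
    """'Had two eggs, toast with butter...' -> list of logged foods with macros."""
    entries = []
    # 1. LLM turns free speech into (food name, quantity) pairs
    for name, quantity in llm_extract_foods(utterance):
        # 2. Nutrition database lookup returns per-unit macros
        per_unit = nutrition_db.lookup(name)
        # 3. Scale macros by quantity and log the full breakdown
        entries.append(LoggedFood(
            name=name,
            quantity=quantity,
            calories=per_unit["calories"] * quantity,
            protein_g=per_unit["protein_g"] * quantity,
            carbs_g=per_unit["carbs_g"] * quantity,
            fat_g=per_unit["fat_g"] * quantity,
        ))
    return entries
```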
Core Features
- AI-powered food logging — speak naturally, get full macro breakdown
- Conversational meal planning and recipe generation based on your goals
- Smart meal alternatives — "What could I eat instead that's lower carb?"
- Pantry and fridge management — auto-generates grocery lists
- AI workout plan generation
- Strava integration for automatic cardio tracking
- AI Memories — learns your preferences and habits over time
- Visual meal timeline showing daily intake patterns
- Push-to-talk interface — hands-free operation while cooking or working out
Why It Matters
Traditional fitness apps require constant data entry: searching foods, adjusting portion sizes, clicking through forms. MacroSync removes all of that friction. The entire app adapts to how humans naturally communicate. It's the difference between using a keyboard and having a conversation.
- Swift
- iOS
- NLP
- AI/ML
- Voice Recognition
- Nutrition APIs
Athlete Leaderboard
Advanced cycling analytics platform with fantasy sports-style scoring system. Not just distance tracking — a sophisticated achievement-based points system with 50+ different badges, fitness/fatigue modeling, power zone analysis, and comparative performance metrics. Built for serious athletes who want deeper insights than Strava provides.
Fantasy Sports Scoring
Instead of just tracking distance, the system awards points for achievements across multiple categories: speed milestones, volume targets, elevation gains, weather challenges, Strava segment conquests, and duration benchmarks. Each achievement is worth different point values, creating a competitive scoring system similar to fantasy sports.
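A toy sketch of that scoring idea, using a few of the real badge categories; the rule names and point values here are hypothetical, while the production system defines 50+ achievements.

```python
# Hypothetical point values; the production system defines 50+ achievements.
ACHIEVEMENT_POINTS = {
    "avg_speed_20mph": 25,
    "century_ride": 100,
    "metric_century": 60,
    "climb_5000ft": 75,
    "rain_warrior": 40,
}


def score_ride(ride: dict) -> tuple[int, list[str]]:
    """Evaluate one synced ride against achievement rules and total its points."""
    earned = []
    if ride["avg_speed_mph"] >= 20:
        earned.append("avg_speed_20mph")
    if ride["distance_mi"] >= 100:
        earned.append("century_ride")
    elif ride["distance_mi"] * 1.60934 >= 100:   # metric century (100 km)
        earned.append("metric_century")
    if ride["elevation_ft"] >= 5000:
        earned.append("climb_5000ft")
    if ride.get("weather") == "rain":
        earned.append("rain_warrior")
    return sum(ACHIEVEMENT_POINTS[a] for a in earned), earned
```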
Advanced Analytics
- Fitness/Fatigue/Form (TSB) modeling — training stress balance over time (sketched after this list)
- Time in Heart Rate Zones — aerobic vs anaerobic breakdown per ride
- Power Zone analysis — structured training insights
- Best Power metrics — FTP, TSS, IF (Intensity Factor), NP (Normalized Power)
- Relative effort comparison charts — compare workouts normalized for intensity
- Monthly summary cards with achievement highlights
- Multi-bike equipment tracking with per-bike mileage and maintenance alerts
- Weather conditions logged per ride — temperature, wind, precipitation
- Scatter plot visualizations for distance vs elevation, power vs HR, and more
- Activity feed with per-ride achievement grids showing all earned badges
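For the fitness/fatigue/form item above, here is one common formulation of that model, assuming one Training Stress Score (TSS) value per day; the platform's exact smoothing constants may differ.

```python
CTL_DAYS = 42   # "fitness": long-term exponentially weighted training load
ATL_DAYS = 7    # "fatigue": short-term exponentially weighted training load


def fitness_fatigue_form(daily_tss: list[float]) -> list[dict]:
    """Per-day Fitness (CTL), Fatigue (ATL) and Form (TSB) from daily
    Training Stress Scores, via simple exponential smoothing."""
    ctl = atl = 0.0
    rows = []
    for tss in daily_tss:
        tsb = ctl - atl                 # form going into today's ride
        ctl += (tss - ctl) / CTL_DAYS
        atl += (tss - atl) / ATL_DAYS
        rows.append({"ctl": round(ctl, 1), "atl": round(atl, 1), "tsb": round(tsb, 1)})
    return rows
```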
Achievement Categories
Over 50 different badges spanning speed achievements (20mph+ average, 30mph+ max), volume milestones (century rides, metric centuries), elevation challenges (2,000ft+ and 5,000ft+ of climbing), weather warrior badges (riding in rain, snow, or heat), Strava segment conquests, and duration-based achievements (2hr+, 4hr+, 6hr+ rides).
Zero Manual Entry
Real-time auto-sync from Strava means zero manual data entry. Finish a ride, and within seconds it's analyzed, scored, and displayed with all achievements and metrics. The platform has been in daily use for over 4 years, tracking thousands of rides.
- React
- Node.js
- Strava API
- PostgreSQL
- Chart.js
- WebSockets
LifeDash AI — Autonomous Personal Finance Platform
A multi-agent AI system that autonomously manages your finances. Not a dashboard you check — an operating system that watches your bank accounts, email, and calendar, then acts on what it finds. Natural language interface for creating tasks and agents. Runs entirely on-device.
Multi-Agent Architecture
Up to 8 concurrent agents run on independent schedules — daily, weekly, monthly, nightly. Each agent is specialized: subscription monitoring, spending analysis, Amazon purchase tracking, grocery reports, food & dining trends, email triage, duplicate charge detection, and calendar/meeting alerts. Agents queue tasks, execute autonomously, and surface results in a live feed.
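A stripped-down sketch of how independent agent schedules might be dispatched, assuming asyncio and a simple polling loop; the `Agent` fields and the 30-second poll interval are illustrative, not the production scheduler.

```python
import asyncio
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Awaitable, Callable


@dataclass
class Agent:
    """One specialized agent with its own cadence (fields are illustrative)."""
    name: str
    interval: timedelta                    # daily, weekly, nightly, ...
    run: Callable[[], Awaitable[str]]      # the agent's autonomous task
    next_due: datetime | None = None


async def run_agents(agents: list[Agent], feed: list):
    """Dispatch due agents concurrently and surface results in the live feed."""
    for a in agents:
        a.next_due = datetime.now()
    while True:
        now = datetime.now()
        due = [a for a in agents if a.next_due <= now]
        results = await asyncio.gather(*(a.run() for a in due))
        for agent, result in zip(due, results):
            feed.append((agent.name, result))       # live activity feed entry
            agent.next_due = now + agent.interval   # reschedule
        await asyncio.sleep(30)                     # poll granularity
```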
Core Capabilities
- Real bank account integration via Teller.io — read-only access to live transaction data
- AI-powered subscription detection and duplicate charge identification
- Merchant-specific spending reports with breakdowns, trends, and comparisons
- Gmail integration — searches receipts, flags urgent emails, drafts cancellation messages
- Calendar integration — meeting alerts and event monitoring
- Natural language agent creation — describe what you want monitored and the system builds and schedules the agent
- Scheduled reporting — daily spending, weekly grocery, monthly subscription audits, nightly trend analysis
- Inbox with priority-ranked items requiring attention
- 24-hour activity monitoring with live status dashboard
- Preference memory — learns your rules and applies them across agents
Natural Language Control
Create agents and tasks by describing them conversationally: "Create me an agent that warns me before meetings on my calendar" or "Create a daily subscription overview that lists all entertainment subscriptions. Make it run at 16:27." The system interprets the request, determines the schedule, and sets up the monitoring — no configuration UI needed.
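A minimal sketch of turning a conversational request into a schedulable agent definition, assuming a Pydantic schema for the plan; `llm_structured` is a hypothetical helper standing in for the Claude API call.

```python
from pydantic import BaseModel


class AgentSpec(BaseModel):
    """Structured plan the LLM must return for a requested agent (illustrative)."""
    name: str
    description: str
    schedule: str              # e.g. cron-style "27 16 * * *" for "run at 16:27"
    data_sources: list[str]    # e.g. ["bank", "gmail", "calendar"]


def create_agent_from_request(request: str, llm_structured) -> AgentSpec:
    """Have the LLM fill the AgentSpec schema from a conversational request,
    then validate the result before handing it to the scheduler."""
    raw = llm_structured(
        prompt=f"Turn this request into an agent definition: {request}",
        schema=AgentSpec.model_json_schema(),
    )
    return AgentSpec.model_validate(raw)
```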
- Next.js
- Python
- FastAPI
- Claude API
- Teller.io
- Gmail API
- Calendar API
Computer Vision Systems
Production TensorFlow-based head tracking system and Face API with real-time emotion, age, and gender prediction. Deployed in commercial environments and forked by other developers for their own projects.
- Python
- TensorFlow
- OpenCV
- Real-time CV
Ochsner Health Installations
7+ production embedded systems for healthcare facilities, including a face-responsive LED installation and multi-sensor environmental networks. Built to meet strict healthcare facility safety and reliability standards.
- C++
- Teensy
- Arduino
- LED Control
Technical Skills
Languages
Python, JavaScript/TypeScript, Swift, C++, SQL, Ruby
AI/ML
LLM Fine-tuning, Multi-Agent Systems, pydantic-ai, Gemini, Computer Vision, TensorFlow, PyTorch
Backend
Django, Flask, FastAPI, Node.js, RESTful APIs, Celery, Concurrent Processing
Frontend
React, Next.js, Three.js, TypeScript, Swift, Cross-platform Mobile
Real-Time
WebSocket Streaming, 60fps Processing, Sub-17ms Latency Architecture
Embedded/Hardware
Arduino, Teensy, C++, Multi-channel LED Control, Servo Systems, IoT
Cloud/DevOps
AWS SageMaker, Vercel, Railway, Docker, Git/GitHub, CI/CD
Contact
Available for consulting, contract work, and full-time opportunities.