About
Senior backend engineer and systems architect with 15+ years delivering production-grade technical solutions across AI/ML systems, computer vision, augmented reality, embedded systems, and full-stack applications.
Currently leading AI/ML pipeline development at Tastemade using multi-agent workflows with LLM orchestration. Inventor of patent-pending augmented reality technology. Built multiple never-before-seen installations for major brands and venues including Citizen Watch, Formula 1, Foot Locker, the LA Clippers at Intuit Dome, and Ochsner Health.
Featured Projects
Omakase — Multi-Agent AI Recipe Processing Pipeline
Tastemade
Agentic AI system with a multi-layer validation architecture built on Generate→Judge→Critic loops. Uses the pydantic-graph workflow engine with type-safe models and Gemini-based LLM agents to process recipe content at scale, with a 93%+ evaluation pass rate.
Technical Architecture
Generate→Judge→Critic loops with multi-iteration refinement (max 5 attempts). Three processing layers: Style Guide → Structuring → Ingredient Metadata. pydantic-graph workflow engine with type-safe Pydantic models orchestrating Gemini-based LLM agents.
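To make the loop concrete, here is a minimal sketch of how a Generate→Judge→Critic cycle can be wired as pydantic-graph nodes. The state fields, pass criterion, and agent calls are illustrative stubs, not the production pipeline:

```python
from __future__ import annotations
from dataclasses import dataclass
from pydantic_graph import BaseNode, End, Graph, GraphRunContext

@dataclass
class RecipeState:
    draft: str = ""
    feedback: str = ""
    attempts: int = 0

@dataclass
class Generate(BaseNode[RecipeState]):
    async def run(self, ctx: GraphRunContext[RecipeState]) -> Judge:
        ctx.state.attempts += 1
        # A Gemini-based generator agent would run here, conditioned on feedback.
        ctx.state.draft = f"draft v{ctx.state.attempts} ({ctx.state.feedback or 'initial'})"
        return Judge()

@dataclass
class Judge(BaseNode[RecipeState, None, str]):
    async def run(self, ctx: GraphRunContext[RecipeState]) -> Critic | End[str]:
        # Stand-in for the LLM judge's verdict against the style guide.
        passed = ctx.state.attempts >= 2
        if passed or ctx.state.attempts >= 5:  # max 5 refinement attempts
            return End(ctx.state.draft)
        return Critic()

@dataclass
class Critic(BaseNode[RecipeState]):
    async def run(self, ctx: GraphRunContext[RecipeState]) -> Generate:
        ctx.state.feedback = "tighten step ordering"  # critique fed back to Generate
        return Generate()

graph = Graph(nodes=(Generate, Judge, Critic))
result = graph.run_sync(Generate(), state=RecipeState())
print(result.output)
```

The return annotations are what define the graph's edges, so the loop topology is type-checked rather than configured separately.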
Key Features
- Parallel processing with semaphore concurrency (3 items simultaneously; see the sketch after this list)
- Kappo normalization agent for ingredient standardization
- Compound ingredient parsing and metadata extraction
- Batch-then-sequential judge evaluation strategy
- RFC 2119-compliant prompt standardization
- Deduplication and caching for efficiency
- Logfire observability integration throughout pipeline
- Template string generation with typed placeholders
- Idempotency patterns for reliable concurrent processing
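The semaphore pattern from the first bullet is simple but load-bearing. A minimal sketch of the idea, with the per-item work stubbed out:

```python
import asyncio

SEM = asyncio.Semaphore(3)  # at most 3 recipes in flight at once

async def process_recipe(recipe_id: str) -> str:
    async with SEM:
        # Stand-in for one full Generate→Judge→Critic run.
        await asyncio.sleep(0.1)
        return f"{recipe_id}: done"

async def main() -> None:
    ids = [f"recipe-{i}" for i in range(10)]
    results = await asyncio.gather(*(process_recipe(r) for r in ids))
    print(results)

asyncio.run(main())
```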
Pre-Pipeline: LLM Training
Before Omakase, built the first-generation AI system by fine-tuning 50+ models on AWS SageMaker for recipe-specific tasks — ingredient extraction, step parsing, nutrition analysis, and more. A massive training effort with automated deployment workflows at production scale.
Related Systems
- agent_swarm: multi-agent coordination framework
- anneal: meal planning pipeline with constraint solving
- Python
- Django
- pydantic-ai
- pydantic-graph
- Gemini API
- Celery
- PostgreSQL
- Logfire
- AWS SageMaker
Connectopia — Real-Time Generative AI for Live Arena Experiences
LA Clippers · Intuit Dome
A live, fully local generative-AI system deployed inside the LA Clippers' Intuit Dome. Fans collaboratively build a world in real time, then watch it materialize as a cinematic, audio-visual experience synchronized across massive displays, sound design, and architectural lighting. From a fan's perspective, it feels playful and immediate. Under the hood, it's a tightly orchestrated multi-stage AI pipeline operating entirely on-prem.
My Role
Designed and implemented the generative AI pipeline end-to-end: prompt generation, custom model training, image generation, video generation, upscaling, orchestration, and synchronization with sound and lighting systems.
How It Works
Fans interact with four 75" touch portals to build a "district" by selecting structured components — environment, style, architecture, and mood. These become structured inputs to the generative system. To ensure visual consistency, stylistic control, and reliability in a live venue, the image and video models were custom-trained with LoRAs specific to the project's categories and visual language.
The pipeline executes in three sequential stages with controlled fan-out:
- Prompt Generation (LLM) — A locally hosted LLM converts fan selections into three distinct image prompts, aligned with the trained visual styles.
- Image Generation — Three images generated in parallel using custom-trained image LoRAs.
- Multimodal Video Prompting + Video Generation — Each image is fed into a multimodal LLM, which analyzes it and generates three image-conditioned video prompts. Three videos generated in parallel at 1504×640 using project-specific video LoRAs, then stitched and crossfaded into a single ~30-second sequence.
The final video is upscaled in two passes, each using a different method, then routed through the same control pipeline that drives spatial audio and arena lighting cues — ensuring all visual, audio, and lighting elements remain synchronized.
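The shape of that pipeline, reduced to a hedged asyncio sketch. All model calls are stubbed; only the stage ordering and 1-to-3 fan-out reflect the description above:

```python
import asyncio

async def generate_prompts(selections: dict) -> list[str]:
    # Stage 1: a local LLM turns fan selections into three image prompts.
    return [f"{selections['style']} district, view {i}" for i in range(3)]

async def generate_image(prompt: str) -> str:
    await asyncio.sleep(0)  # stand-in for a LoRA-conditioned image job
    return f"image<{prompt}>"

async def generate_video(image: str) -> str:
    # Stage 3: a multimodal LLM writes an image-conditioned video prompt,
    # then a project-specific video LoRA renders at 1504x640 (both stubbed).
    await asyncio.sleep(0)
    return f"video<{image}>"

async def run_pipeline(selections: dict) -> list[str]:
    prompts = await generate_prompts(selections)                  # sequential stage
    images = await asyncio.gather(*map(generate_image, prompts))  # fan out to 3
    videos = await asyncio.gather(*map(generate_video, images))   # fan out to 3
    return list(videos)  # downstream: stitch, crossfade, upscale, sync cues

print(asyncio.run(run_pipeline({"style": "art deco"})))
```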
System Characteristics
- Fully local execution — RTX 4090 GPUs for image/video generation, Mac minis for LLM inference
- Custom-trained image and video LoRAs for style and category control
- No cloud dependencies
- Weatherproof, on-site hardware deployment
- Designed for continuous operation in a live arena environment
- Python
- FastAPI
- ComfyUI
- LTX Video
- Custom LoRAs
- gRPC
- Socket.IO
- Unreal Engine
- RTX 4090
Augmented Glass AR Platform
Patent-pending real-time 3D eye tracking AR system that powers never-before-seen commercial experiences. Core technology deployed to 12 international locations for Formula 1 and the world's first AR retail display at Macy's Herald Square for Citizen Watch.
System Architecture
Built a real-time computer vision system that tracks viewer eye position in 3D space at 60fps with sub-17ms latency. Core engine written in Python with OpenCV for face detection, eye tracking, and depth estimation. Outputs perspective-correct AR content that responds to viewer movement in real time.
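The production engine uses custom algorithms, but the core idea can be sketched with stock OpenCV: detect the face, estimate depth from its apparent size under a pinhole-camera model, and back-project to a 3D viewer position. The focal-length and face-width constants below are assumptions, not calibrated values:

```python
import cv2

FOCAL_PX = 800.0       # assumed focal length in pixels (calibrate per camera)
FACE_WIDTH_MM = 150.0  # assumed average face width

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # Depth from apparent size, then back-project the face centre to 3D (mm).
        z = FOCAL_PX * FACE_WIDTH_MM / w
        px = x + w / 2 - frame.shape[1] / 2
        py = y + h / 2 - frame.shape[0] / 2
        print(f"viewer at ({px * z / FOCAL_PX:.0f}, {py * z / FOCAL_PX:.0f}, {z:.0f}) mm")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
```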
Technical Components
- Core tracking engine: Python/OpenCV with custom algorithms for 3D position estimation
- AR rendering: Three.js-based configurator with real-time perspective correction
- LED controller: Custom tone-frequency-based protocol for synchronized lighting
- Deployment: Raspberry Pi and embedded systems for 24/7 operation
- Sub-17ms end-to-end latency from camera to display update
Applications
This platform powers multiple high-profile installations where traditional AR wouldn't work. The system creates the illusion of 3D objects floating inside glass displays by tracking where the viewer is looking and adjusting the rendered perspective in real time.
- Python
- OpenCV
- Three.js
- WebGL
- Raspberry Pi
- Real-time CV
Formula 1 Livery Experience
Interactive AR experience deployed at 12 global Formula 1 events. Fans customize F1 car liveries in real time, seeing their designs rendered on a life-size 3D car that responds to their viewing angle. Built on the Augmented Glass platform with custom F1-specific rendering.
Experience Design
Fans approach a large glass display showing a Formula 1 car. Using a touch interface, they select colors, patterns, and sponsor placements. The system tracks their eye position and renders the car from their exact viewing angle, creating a perfect AR illusion without headsets or phones.
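The standard way to get this kind of perspective-correct illusion is an off-axis (asymmetric-frustum) projection driven by the tracked eye position. This is a sketch of that construction, not the project's actual renderer; in production the matrix would feed a Three.js camera:

```python
import numpy as np

def off_axis_projection(eye, screen_w, screen_h, near=0.1, far=100.0):
    """Asymmetric frustum for a viewer at `eye` (metres), in coordinates
    centred on a screen_w x screen_h display lying in the z=0 plane."""
    ex, ey, ez = eye  # ez is the viewer's distance from the glass, > 0
    # Project the screen edges through the eye onto the near plane.
    left = (-screen_w / 2 - ex) * near / ez
    right = (screen_w / 2 - ex) * near / ez
    bottom = (-screen_h / 2 - ey) * near / ez
    top = (screen_h / 2 - ey) * near / ez
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

# Viewer 10 cm right of centre, 60 cm from a 1.2 m x 0.7 m display.
print(off_axis_projection((0.1, 0.0, 0.6), 1.2, 0.7).round(3))
```

Re-deriving this matrix every frame from the tracked eye position is what keeps the rendered car locked to the viewer's actual line of sight.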
Deployment Scale
Installed at 12 international F1 race events across multiple continents. Each installation required weatherproofing, reliable 24/7 operation, and calibration for different lighting conditions and display sizes.
- Augmented Glass Platform
- Three.js
- WebGL Shaders
- Real-time CV
- Embedded Systems
Citizen Watch Interactive Display
Custom-built LED lighting control system for Citizen Watch's display at Macy's Herald Square. Transparent OLED screens on dual mechanically articulating LG 55" doors showcase watches, while a 26-channel LED array — one per watch — is synchronized with the video content through an audio-frequency control protocol.
Tone-Frequency Control Protocol
The video feed playing on the transparent OLED screens carries an embedded audio track with encoded control frequencies. The system listens for specific frequencies and tone lengths to determine lighting commands — on/off, brightness, effects like waves of light across the watches. Different frequency ranges (600-4200 Hz) and durations map to different actions.
This means video designers control the entire physical lighting rig just by editing their audio track. No additional software, no programmer needed for lighting changes. They can choreograph complex lighting sequences purely through their video production workflow.
Engineering Challenge
Getting clean frequency detection in a retail environment with ambient noise proved extremely difficult. The system uses FFT-based frequency analysis with noise rejection, requiring multiple consecutive detections before acting on a command. Output is 26-channel PWM control via dual PCA9685 boards, with CIE 1931 lightness curves for perceptually smooth dimming.
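A hedged sketch of that detection loop: FFT peak-picking restricted to the 600-4200 Hz control band, a consecutive-detection debounce, and the CIE 1931 lightness-to-duty curve. The tone-to-command table and thresholds here are hypothetical, not the installation's real mapping:

```python
import numpy as np

SAMPLE_RATE = 44_100
WINDOW = 2048
COMMANDS = {1000: "all_on", 1800: "all_off", 2600: "wave"}  # hypothetical mapping
TOLERANCE_HZ = 50
CONFIRMATIONS = 3  # consecutive hits required before acting (noise rejection)

def dominant_freq(samples: np.ndarray) -> float:
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), 1 / SAMPLE_RATE)
    band = (freqs >= 600) & (freqs <= 4200)  # control band from the protocol
    return float(freqs[band][np.argmax(spectrum[band])])

def cie1931_duty(lightness: float) -> float:
    """Map perceptual lightness L* in [0, 1] to PWM duty per CIE 1931."""
    L = lightness * 100
    return L / 903.3 if L <= 8 else ((L + 16) / 116) ** 3

streak, last = 0, None

def on_window(samples: np.ndarray) -> None:
    global streak, last
    f = dominant_freq(samples)
    hit = next((c for tone, c in COMMANDS.items() if abs(f - tone) < TOLERANCE_HZ), None)
    streak = streak + 1 if hit and hit == last else (1 if hit else 0)
    last = hit
    if hit and streak == CONFIRMATIONS:
        print(f"execute {hit}")  # e.g. drive PCA9685 channels via cie1931_duty()

# Demo: three consecutive windows of a clean 1 kHz tone trigger "all_on".
t = np.arange(WINDOW) / SAMPLE_RATE
for _ in range(CONFIRMATIONS):
    on_window(np.sin(2 * np.pi * 1000 * t))
```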
- Python
- FFT Analysis
- PWM Control
- PCA9685
- Raspberry Pi
- Embedded Systems
MacroSync
AI-first nutrition and training platform that behaves like a personal coaching operating system, not a form-based tracker. Built around natural-language understanding, multimodal input, personalization memory, and adaptive planning across food logging, coaching, meal strategy, and progress optimization. Voice is supported, but only as one interface layer in a broader AI architecture.
AI-First Product Design
The core experience is an AI decision engine for nutrition and training. Users log meals and goals through natural language or photos, and the system resolves ingredients, estimates nutrition, computes macros, and returns contextual recommendations. Input mode is flexible, but the product value comes from orchestration and reasoning quality.
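Once the AI layer has resolved free-form input into structured items, the macro arithmetic itself is the standard Atwater calculation. A small sketch with the LLM resolution step stubbed and illustrative nutrition values:

```python
from dataclasses import dataclass

@dataclass
class LoggedItem:
    name: str
    protein_g: float
    carbs_g: float
    fat_g: float

def calories(item: LoggedItem) -> float:
    # Atwater factors: 4 kcal/g for protein and carbs, 9 kcal/g for fat.
    return item.protein_g * 4 + item.carbs_g * 4 + item.fat_g * 9

# In the real flow, the LLM resolves "grilled chicken and rice" into items.
meal = [
    LoggedItem("chicken breast", 31.0, 0.0, 3.6),
    LoggedItem("rice, 1 cup", 4.3, 45.0, 0.4),
]
print(f"{sum(calories(i) for i in meal):.0f} kcal")  # 357 kcal
```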
Core Features
- Natural-language and photo-assisted food logging with macro/micronutrient estimation
- AI meal planning, substitutions, and recommendation generation based on goals
- Pantry and fridge memory with automated grocery list intelligence
- AI workout planning and coaching suggestions
- Strava integration for automatic cardio ingestion and analysis
- AI Memories layer that learns preferences, routines, and constraints over time
- Timeline-based nutrition insights and progress analytics
- In-app AI assistant for contextual follow-up and one-tap execution
- Optional voice input for hands-free capture workflows
Why It Matters
Most fitness products are manual logging tools with superficial AI layers. MacroSync inverts that: AI handles interpretation, planning, and guidance while the interface minimizes friction. The result is a system that feels proactive and personalized rather than reactive and form-driven.
- React Native
- TypeScript
- NLP
- AI/ML
- LLM APIs
- Nutrition APIs
- Strava API
Athlete Leaderboard
Fantasy-football-style monthly competition platform for cycling, built for friend groups. Strava activities auto-sync into a custom scoring engine with live rankings, projected outcomes, and achievement-based points. On top of the competition loop, the app layers serious training intelligence: fitness/fatigue/form modeling, zone analytics, power metrics, and long-term progression tracking.
Competition Engine
Each month runs like a fantasy league season: users choose the leaderboard metric (for example, moving time), then compete through an achievement-driven points system. The platform calculates per-athlete points, rank movement, and projected outcomes in real time as rides sync from Strava.
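In miniature, the scoring loop looks something like the following; the achievement rules and point values are hypothetical placeholders for the real table:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    athlete: str
    moving_time_s: int
    elevation_m: float

# Hypothetical rules: (name, predicate, points awarded).
ACHIEVEMENTS = [
    ("two_hour_ride", lambda a: a.moving_time_s >= 7200, 10),
    ("climber", lambda a: a.elevation_m >= 500, 15),
]

def score(activities: list[Activity]) -> dict[str, int]:
    points: dict[str, int] = {}
    for act in activities:  # runs on every Strava sync
        earned = sum(p for _, pred, p in ACHIEVEMENTS if pred(act))
        points[act.athlete] = points.get(act.athlete, 0) + earned
    return dict(sorted(points.items(), key=lambda kv: -kv[1]))  # live rankings

rides = [Activity("ana", 7300, 650), Activity("ben", 7500, 300)]
print(score(rides))  # {'ana': 25, 'ben': 10}
```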
Advanced Analytics
- Configurable monthly leaderboard modes (moving time, distance, elevation, and more)
- Live rank updates with probability-style projections and score breakdowns
- Achievement table per athlete showing exactly how points were earned
- Fitness/Fatigue/Form (TSB) modeling — training stress balance over time (see the sketch after this list)
- Time in Heart Rate Zones — aerobic vs anaerobic breakdown per ride
- Power Zone analysis — structured training insights
- Best Power metrics — FTP, TSS, IF (Intensity Factor), NP (Normalized Power)
- Relative effort comparison charts — compare workouts normalized for intensity
- Monthly summary cards with achievement highlights
- Multi-bike equipment tracking with per-bike mileage and maintenance alerts
- Weather conditions logged per ride — temperature, wind, precipitation
- Scatter plot visualizations for distance vs elevation, power vs HR, and more
- Activity feed with per-ride achievement grids showing all earned badges
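The training-load math behind the TSB and power bullets is well documented: Normalized Power is the fourth root of the mean fourth power of 30-second rolling-average watts, IF is NP/FTP, TSS scales duration by intensity squared, and CTL/ATL are 42- and 7-day exponential averages of daily TSS. A sketch of those standard formulas (the app's own implementation may differ in detail):

```python
import numpy as np

def normalized_power(watts: np.ndarray, hz: int = 1) -> float:
    # 30 s rolling average, raised to the 4th power, averaged, 4th root.
    window = 30 * hz
    rolled = np.convolve(watts, np.ones(window) / window, mode="valid")
    return float(np.mean(rolled ** 4) ** 0.25)

def training_stress(watts: np.ndarray, ftp: float, hz: int = 1) -> float:
    np_watts = normalized_power(watts, hz)
    intensity = np_watts / ftp  # IF
    seconds = len(watts) / hz
    return (seconds * np_watts * intensity) / (ftp * 3600) * 100  # TSS

def tsb_series(daily_tss: list[float]) -> list[float]:
    ctl = atl = 0.0  # fitness (42-day) and fatigue (7-day) exponential averages
    form = []
    for tss in daily_tss:
        ctl += (tss - ctl) / 42
        atl += (tss - atl) / 7
        form.append(ctl - atl)  # TSB: positive = fresh, negative = fatigued
    return form

ride = np.full(3600, 210.0)  # one steady hour at 210 W
print(round(training_stress(ride, ftp=250), 1))  # IF 0.84 -> 70.6 TSS
```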
Achievement Categories
50+ scoring achievements spanning speed tiers, weekly volume milestones, elevation targets, weather challenges, segment efforts, and duration benchmarks. This creates a game loop where consistency, intensity, and strategy all matter — not just total miles.
Zero Manual Entry
Real-time Strava auto-sync means no manual logging. Finish a ride and the system scores it, updates standings, and refreshes analytics automatically. The platform has been in daily use for over 4 years across thousands of activities.
- React
- Node.js
- Strava API
- PostgreSQL
- Chart.js
- WebSockets
LifeDash AI — Autonomous Personal Finance Platform
An AI-first personal finance operating system that runs as a real iPad app and Mac app — both built end-to-end by me. It continuously monitors bank activity, inboxes, subscriptions, and calendars, then executes autonomous agent workflows and surfaces only what needs attention. This is proactive financial operations, not passive dashboarding.
Multi-Agent Architecture
Up to 8 concurrent agents run on independent schedules (daily, weekly, monthly, nightly) and handle specialized workflows: subscription audits, merchant spend analysis, duplicate charge detection, Amazon tracking, grocery and dining reports, calendar/event monitoring, and urgent email triage. Agents queue work, execute autonomously, and publish outcomes into a live operational feed and inbox.
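A stripped-down sketch of that pattern: a concurrency cap of 8, per-agent schedules, and outcomes published to a feed. The agent name, interval, and audit body are illustrative:

```python
import asyncio
from dataclasses import dataclass
from typing import Awaitable, Callable

GATE = asyncio.Semaphore(8)  # at most 8 agents executing concurrently

@dataclass
class Agent:
    name: str
    interval_s: float  # stand-in for daily/weekly/monthly/nightly cadences
    run: Callable[[], Awaitable[str]]

async def agent_loop(agent: Agent, feed: asyncio.Queue) -> None:
    while True:
        async with GATE:
            outcome = await agent.run()
            await feed.put((agent.name, outcome))  # publish to the live feed
        await asyncio.sleep(agent.interval_s)

async def subscription_audit() -> str:
    return "no duplicate charges found"  # real agent inspects transactions

async def main() -> None:
    feed: asyncio.Queue = asyncio.Queue()
    audit = Agent("subscription-audit", 86_400, subscription_audit)
    task = asyncio.create_task(agent_loop(audit, feed))
    name, outcome = await feed.get()
    print(f"[{name}] {outcome}")
    task.cancel()

asyncio.run(main())
```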
Core Capabilities
- Built both client apps: dedicated iPad app and Mac desktop app with shared AI workflow model
- Real bank account integration via Teller.io — read-only access to live transaction data
- AI-powered subscription detection and duplicate charge identification
- Merchant-specific spending reports with breakdowns, trends, and comparisons
- Gmail integration — searches receipts, flags urgent emails, drafts cancellation messages
- Calendar integration — meeting alerts and event monitoring
- Natural language agent creation — describe what you want monitored and the system builds and schedules the agent
- Scheduled reporting — daily spending, weekly grocery, monthly subscription audits, nightly trend analysis
- Inbox with priority-ranked items requiring attention
- 24-hour activity monitoring with live status dashboard
- Preference memory — learns your rules and applies them across agents
Natural Language Control
Agents and recurring jobs can be created from plain language requests such as calendar-alert monitors, subscription summaries, or merchant reports with specific run times. The system translates intent into schedule configuration and monitoring logic, then executes automatically without manual setup screens.
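The translation step reduces to "LLM emits a structured spec, the scheduler consumes it." A sketch with the model call stubbed out; the spec fields are assumptions about what such a config needs, not the app's actual schema:

```python
import json
from dataclasses import dataclass

@dataclass
class AgentSpec:
    task: str
    cadence: str  # e.g. "daily", "weekly"
    run_at: str   # "HH:MM", 24-hour clock

SCHEMA_PROMPT = "Translate the user's request into JSON with keys task, cadence, run_at."

def build_agent(request: str, llm_json: str) -> AgentSpec:
    # `llm_json` stands in for the model's structured reply to SCHEMA_PROMPT;
    # the real system would call the LLM here and validate the result.
    return AgentSpec(**json.loads(llm_json))

spec = build_agent(
    "email me a grocery spending report every Monday at 8am",
    '{"task": "grocery spending report", "cadence": "weekly", "run_at": "08:00"}',
)
print(spec)
```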
- Mac App
- iPad App
- Python
- FastAPI
- Claude API
- Teller.io
- Gmail API
- Calendar API
Computer Vision Systems
Production TensorFlow-based head tracking system and Face API with real-time emotion, age, and gender prediction. Deployed in commercial environments and forked by other developers for their own projects.
- Python
- TensorFlow
- OpenCV
- Real-time CV
Ochsner Health Installations
7+ production embedded systems for healthcare facilities, including a face-responsive LED installation and multi-sensor environmental networks. Built to meet strict healthcare facility safety and reliability standards.
- C++
- Teensy
- Arduino
- LED Control
Technical Skills
Languages
Python, JavaScript/TypeScript, Swift, C++, SQL, Ruby
AI/ML
LLM Fine-tuning, Multi-Agent Systems, pydantic-ai, Gemini, Computer Vision, TensorFlow, PyTorch
Backend
Django, Flask, FastAPI, Node.js, RESTful APIs, Celery, Concurrent Processing
Frontend
React, Next.js, Three.js, TypeScript, Swift, Cross-platform Mobile
Real-Time
WebSocket Streaming, 60fps Processing, Sub-17ms Latency Architecture
Embedded/Hardware
Arduino, Teensy, C++, Multi-channel LED Control, Servo Systems, IoT
Cloud/DevOps
AWS SageMaker, Vercel, Railway, Docker, Git/GitHub, CI/CD
Contact
Available for consulting, contract work, and full-time opportunities.