Lead ML Engineer.
Production ML systems with the research discipline to back them up.
I'm Oliver — a Machine Learning Engineer & Researcher based in Boone, NC, with a dual focus on empirical research and production deployment. I recently graduated with a B.S. in Computer Science from Appalachian State (minor in Mathematics, Data Science certificate), and I now work full-time as the Lead ML Engineer at a stealth, pre-launch startup I joined during my senior year. My research interests sit at the intersection of multi-task learning, reinforcement learning theory, and NLP / language modeling.
I joined the startup as a senior-year intern after my capstone — an ML harmonization system and real-time inference dashboard — became the initial production system the team built on. Today I'm the sole MLE on a 10-person team (founder + 8 SWEs), owning all ML architecture, research, and production decisions and working directly with the founder ahead of launch.
How I work
I build deep learning systems from first principles, and I prefer evidence over opinions. I'll implement models from scratch rather than take a library for granted, run ablations rather than trust defaults, and sit with results long enough to understand why something worked before shipping it. Most of my time goes to NLP, representation learning, and applied deep learning — and I genuinely enjoy explaining what's going on, in code, in diagrams, or in prose.
What I care about is the feedback loop between theory and implementation: staying close to the math, designing controlled experiments, and letting the results pick the next step. That's how I ended up with LexiMind — a 272M-parameter encoder-decoder transformer I built from scratch (Pre-LN, RMSNorm, T5 relative position bias, FlashAttention, gated-GELU FFN, KV-cache) with explicit FLAN-T5-base weight mapping, then jointly trained on summarization, topic classification, and multi-label emotion detection. The interesting part wasn't the scale; it was diagnosing negative transfer in naive MTL and designing two targeted interventions that lifted emotion-detection sample-averaged F1 from 0.199 to 0.352.
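To make that architecture description concrete, here's a minimal PyTorch sketch of one Pre-LN encoder block with RMSNorm and a gated-GELU FFN. This is illustrative, not LexiMind's actual code: `PreLNEncoderBlock` and its dimensions are hypothetical names, the T5 relative position bias, FlashAttention, and KV-cache are omitted for brevity, and `nn.MultiheadAttention` stands in for a from-scratch attention implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RMSNorm(nn.Module):
    """Root-mean-square norm, as in T5: no mean-centering, no bias."""

    def __init__(self, d_model: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(d_model))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale each vector by the reciprocal of its RMS.
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return self.weight * x * rms


class GatedGELUFFN(nn.Module):
    """Gated-GELU feed-forward: GELU(x W_gate) * (x W_up), then W_down."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        # Names mirror the T5 v1.1 checkpoint layout (wi_0 / wi_1 / wo),
        # which is the kind of mapping a FLAN-T5 weight import relies on.
        self.wi_0 = nn.Linear(d_model, d_ff, bias=False)  # gate projection
        self.wi_1 = nn.Linear(d_model, d_ff, bias=False)  # up projection
        self.wo = nn.Linear(d_ff, d_model, bias=False)    # down projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.wo(F.gelu(self.wi_0(x)) * self.wi_1(x))


class PreLNEncoderBlock(nn.Module):
    """Pre-LN ordering: normalize, apply sublayer, add residual."""

    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.attn_norm = RMSNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn_norm = RMSNorm(d_model)
        self.ffn = GatedGELUFFN(d_model, d_ff)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention sublayer (norm before, residual after).
        h = self.attn_norm(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        # Feed-forward sublayer, same pattern.
        return x + self.ffn(self.ffn_norm(x))
```

The Pre-LN ordering (normalize before each sublayer rather than after) is what keeps gradients well-behaved as depth grows, and keeping the parameter names aligned with the pretrained checkpoint is what makes an explicit weight mapping tractable.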
Coursework & self-study
A lot of my depth in ML, DL, and RL comes from deliberate self-study on top of my university curriculum. I work through Stanford courses end-to-end — full lectures, written notes, problem sets, and exams — to stay honest about what I actually understand.
Stanford (self-directed, lectures + exams + notes): CS229 Machine Learning · CS230 Deep Learning · CS336 Language Modeling from Scratch · CS224R Deep Reinforcement Learning · CME295/296 Transformers & LLMs.
Appalachian State: Applied Machine Learning · Advanced Reinforcement Learning · Numerical Methods · Computational Mathematics · Statistical Data Analysis · Linear Algebra · Theoretical Computer Science · Data Structures & Algorithms.
Other: DeepLearning.AI Machine Learning Specialization (Andrew Ng) — see certifications. My ongoing reading list leans on classic ML papers and the modern LLM / post-training literature.
Notable academic projects
Formal language interpreter with lambda calculus and type inference (Haskell) · pipelined CPU simulator (C/C++) · 2D game engine (Java). My senior capstone — the ML harmonization system and inference dashboard — is now running in production at the startup and formed the foundation of my full-time role.
Reach out
I'm heads-down at the startup, but I always like talking to people working on hard ML problems — production systems, post-training, RL, language modeling, the works. If that's you, get in touch.