Nathan Hubens
AI researcher in neural network compression · Creator of FasterAI Labs · Smaller and faster models in production
I help teams turn heavy research models into compact, production-ready systems, using pruning, quantization, and distillation tailored to their target hardware (Jetson, Raspberry Pi, microcontrollers).
Through FasterAI Labs, I provide consulting and open-source tools: structured pipeline audits, hands-on implementation of compression techniques, and reproducible benchmarks that quantify the trade-off between accuracy and performance.
Current focus: 2:4 sparsity and low-bit quantization on embedded GPUs and MCUs.
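To make the 2:4 pattern concrete: in every group of four consecutive weights, only the two with the largest magnitude are kept, which is the structured sparsity format accelerated on recent NVIDIA GPUs. A minimal NumPy sketch (an illustration only, not FasterAI code; `prune_2_4` is a hypothetical helper name):

```python
import numpy as np

def prune_2_4(weights: np.ndarray) -> np.ndarray:
    """Apply a 2:4 sparsity pattern: in each group of 4 consecutive
    weights, keep the 2 largest-magnitude entries and zero the rest."""
    w = weights.reshape(-1, 4)                  # view as groups of 4
    smallest = np.argsort(np.abs(w), axis=1)[:, :2]   # 2 smallest per group
    mask = np.ones_like(w, dtype=bool)
    np.put_along_axis(mask, smallest, False, axis=1)  # drop the 2 smallest
    return (w * mask).reshape(weights.shape)

w = np.array([0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.4, 0.01])
print(prune_2_4(w))  # → [ 0.9  0.   0.  -0.7  0.   0.3 -0.4  0. ]
```

In practice this masking is combined with fine-tuning so the network recovers the accuracy lost to pruning.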