AI Inference at the Rugged Edge: Meeting Edge AI Performance with M.2 Accelerators
This paper explores the benefits of domain-specific architectures, specifically M.2 form-factor accelerators, designed to handle demanding deep learning inference workloads at the edge while keeping total cost of ownership in check.