Edge AI first relied on existing general-purpose processor architectures – CPUs and GPUs. However, as computing demands grow and multiply, their shortcomings – chiefly their inefficiency at neural-network processing – become increasingly apparent. Under constraints of space, power supply, heat dissipation, cost, and/or deployment, they deliver insufficient performance, limiting the range of possible edge applications. Domain-specific AI processors have emerged to address this gap and are steadily expanding what is possible at the edge, offering significantly higher cost and power efficiency than general-purpose hardware. However, processors and architectures vary in the relative performance and efficiency gains they offer over a customer's existing computing solution, so it is important to evaluate and compare them with task-specific, real-world measurements.
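As a minimal sketch of what "task-specific, actual measurement" can mean in practice, the snippet below times a representative workload and reports median latency and throughput. The `benchmark` helper and the matrix-multiply stand-in are illustrative assumptions, not a reference methodology; in a real evaluation the callable would run one inference of the customer's actual model on the candidate hardware.

```python
import time
import statistics
import numpy as np

def benchmark(task, warmup=5, runs=50):
    """Time a callable representing one unit of the target workload.

    Hypothetical helper for illustration: returns median latency in
    milliseconds and the implied throughput in inferences per second.
    """
    for _ in range(warmup):  # warm caches, clocks, and any JIT paths
        task()
    samples_ms = []
    for _ in range(runs):
        t0 = time.perf_counter()
        task()
        samples_ms.append((time.perf_counter() - t0) * 1e3)
    median_ms = statistics.median(samples_ms)  # median resists outliers
    return median_ms, 1e3 / median_ms

# Stand-in workload: a small matrix multiply in place of a real model.
a = np.random.rand(256, 256).astype(np.float32)

def one_inference():
    _ = a @ a

median_ms, inf_per_s = benchmark(one_inference)
print(f"median latency: {median_ms:.3f} ms, throughput: {inf_per_s:.1f} inf/s")
```

Using the median rather than the mean reduces the influence of scheduling hiccups; for a fuller picture one would also record tail latencies and power draw, since edge deployments are typically constrained by both.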