abstract = "The rapid development of edge AI applications has led to the demand for high-performance, power-efficient embedded computing platforms. This paper evaluates the co...
This section provides a concise summary of the performance evaluation results for the investigated object detection models across various edge computing platforms. The 3D plots in ...
Some CPUs offer a degree of parallelism through, for example, vector extensions; however, this is negligible compared to an inherently parallel processor such as a GPU or an NP...
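The gap between scalar execution and data-parallel execution can be illustrated with a minimal sketch. The comparison below is not from the paper; it uses NumPy's `dot`, which dispatches to SIMD-capable BLAS kernels, as a stand-in for CPU vector extensions, against a plain Python scalar loop computing the same reduction.

```python
import numpy as np

def scalar_sum_of_squares(xs):
    # Scalar path: one multiply-add per loop iteration,
    # no data-level parallelism.
    total = 0.0
    for x in xs:
        total += x * x
    return total

def vectorized_sum_of_squares(xs):
    # np.dot dispatches to vectorized kernels that process
    # several elements per instruction where the CPU allows.
    return float(np.dot(xs, xs))

data = np.arange(100_000, dtype=np.float64)
scalar = scalar_sum_of_squares(data)
vector = vectorized_sum_of_squares(data)
assert np.isclose(scalar, vector, rtol=1e-9)
```

Both paths compute the same result; the vectorized path is typically one to two orders of magnitude faster on commodity CPUs, yet this speedup is still modest next to the thousands of parallel lanes a GPU or NPU provides.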
Developers can now achieve more than 60% better energy efficiency compared to GPU-based devices, resulting in greener edge devices. Edge AI devices can also inclu...
In this work, we investigate the inference time of the MobileNet family, the EfficientNet V1 and V2 families, the VGG models, the ResNet family, and InceptionV3 on four edge platforms. Specifica...
We show that Google platforms offer the fastest average inference time, especially for newer models such as the MobileNet and EfficientNet families, while the Intel Neural Stick is the most univ...
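Average inference time, as reported in comparisons like the one above, is typically measured by timing repeated forward passes after a warm-up phase. The harness below is a hedged sketch of that procedure, not the paper's actual benchmarking code: `dummy_model` is a hypothetical stand-in (a single dense layer as a matrix multiply) for a deployed network such as a MobileNet.

```python
import time
import numpy as np

def benchmark(model, inputs, warmup=5, runs=20):
    """Return the mean per-inference latency in milliseconds.

    `model` is any callable taking one input batch. Warm-up
    iterations are excluded so lazy initialization and cold
    caches do not skew the timed runs.
    """
    for _ in range(warmup):
        model(inputs)
    start = time.perf_counter()
    for _ in range(runs):
        model(inputs)
    elapsed = time.perf_counter() - start
    return elapsed / runs * 1e3

# Hypothetical stand-in "model": one dense layer over a
# flattened 224x224x3 input, mimicking a classifier head.
weights = np.random.rand(224 * 224 * 3, 10).astype(np.float32)
dummy_model = lambda x: x @ weights
batch = np.random.rand(1, 224 * 224 * 3).astype(np.float32)

print(f"mean latency: {benchmark(dummy_model, batch):.3f} ms")
```

On a real device the same loop would wrap the platform's runtime call (e.g. a TFLite interpreter invocation or an OpenVINO infer request) in place of `dummy_model`.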