A Quantitative Comparison for Image Recognition on Accelerated Heterogeneous Cloud Infrastructures

Authored by: D. Danopoulos, C. Kachris, D. Soudris

Heterogeneous Computing Architectures

Print publication date: September 2019
Online publication date: September 2019

Print ISBN: 9780367023447
eBook ISBN: 9780429399602

Modern real-world machine learning applications such as visual or speech recognition have become some of the most computationally intensive workloads across a wide variety of fields. Deep learning in particular has gained significant traction due to the high classification accuracy it offers, but at the cost of network complexity and compute workload. Recent work has shown that the domain of deep neural networks is crossing over from embedded systems into data centers. Hence, CPU, GPU, and FPGA vendors are racing to offer high-performance platforms that are not only fast but also efficient, since the energy footprint of the large data centers operating today is the trade-off for the raw compute power of these platforms. Cloud computing services such as Amazon AWS integrate and flexibly offer such modern compute platforms for all kinds of tasks, further easing their development process. In this chapter we focus on accelerating image recognition on the Amazon Elastic Compute Cloud, using the Caffe deep learning framework and comparing the results in terms of speed and accuracy between different high-end devices (CPU, GPU, and FPGA) and network models, taking into account the operational cost of each platform.
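The speed-versus-cost trade-off the chapter evaluates can be reduced to a simple normalization: dividing each platform's hourly instance price by its sustained classification throughput. A minimal sketch of that calculation follows; the throughput and pricing figures (and the instance names) are illustrative placeholders, not the chapter's measured results.

```python
# Sketch: normalize raw speed by operational cost, as in the chapter's
# CPU/GPU/FPGA comparison. All numbers below are hypothetical examples.

def cost_per_million_images(images_per_sec: float, usd_per_hour: float) -> float:
    """USD to classify one million images at a sustained throughput."""
    seconds_needed = 1_000_000 / images_per_sec
    return usd_per_hour * seconds_needed / 3600.0

# (images/s, $/hour) -- placeholder figures for illustration only
platforms = {
    "CPU  (e.g. c5.4xlarge)": (120.0, 0.68),
    "GPU  (e.g. p3.2xlarge)": (1500.0, 3.06),
    "FPGA (e.g. f1.2xlarge)": (900.0, 1.65),
}

for name, (throughput, price) in platforms.items():
    usd = cost_per_million_images(throughput, price)
    print(f"{name}: ${usd:.2f} per 1M images")
```

A faster device is thus not automatically cheaper: a platform with lower raw throughput can still win on cost per inference if its hourly price is proportionally lower, which is why the chapter weighs operational cost alongside speed and accuracy.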
