In 2013, Microsoft Research released the article "Nobody ever got fired for buying a cluster". At that time, optimizing computation on a single CPU machine was already a serious competitor to distributed clusters.

Nowadays, it is even more the case with GPUs:

(figure: Google Brain)

Benchmarks with the BIDMach library show that the main classification algorithms run faster on a single GPU instance than on a cluster of a hundred CPU instances using distributed technologies such as Spark.

The new deep learning approach:

(figure: Deep learning)

Practical examples from NVIDIA:

(figure: Deep learning)

The traditional approach of feature engineering:

(figure: Deep learning)

where the main problem was finding the right definition of the features.
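To make that problem concrete, here is a toy sketch (the task and the feature are invented for illustration, not taken from the article): to separate two classes of points, a human expert must already know which quantity discriminates them.

```python
# Toy task: separate points inside a circle of radius 1 from points outside it.
# In the traditional pipeline, a human engineers the discriminative feature by hand.

def engineered_feature(x, y):
    # Domain knowledge injected by the engineer: squared distance to the origin.
    return x * x + y * y

def classify(x, y):
    # Once the right feature is found, a simple threshold solves the task.
    return 1 if engineered_feature(x, y) < 1.0 else 0

print(classify(0.2, 0.3))  # point inside the circle
print(classify(1.5, 1.5))  # point outside the circle
```

The hard part is not the threshold but knowing that squared distance is the feature to compute; with a different task, the whole feature definition has to be redone by hand.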

And the new deep learning approach:

(figure: Deep learning)

is inspired by nature:

(figure: Deep learning)

with the following advantages:

(figure: Deep learning)
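The key advantage — the network learns its own intermediate features instead of relying on hand-defined ones — can be sketched in a few lines of plain Python. The XOR task, network size and learning rate below are illustrative choices, not details from the article; XOR is the classic case where no single hand-crafted linear feature works, so the hidden layer has to discover useful combinations by itself.

```python
import math
import random

random.seed(0)

# XOR is not linearly separable: no single linear feature of (x1, x2) solves it.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]

H = 4  # hidden units (illustrative choice)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # The hidden layer computes learned features of the raw input.
    h = [sig(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    o = sig(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, o

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y))

loss_before = loss()

lr = 1.0
for _ in range(20000):
    for x, y in zip(X, Y):
        h, o = forward(x)
        # Backpropagation for squared error with sigmoid activations.
        do = (o - y) * o * (1 - o)
        for j in range(H):
            dh = do * w2[j] * h[j] * (1 - h[j])  # uses w2[j] before its update
            w2[j] -= lr * do * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * do

loss_after = loss()
print(loss_before, "->", loss_after)
```

No feature was defined by hand: gradient descent shaped the hidden units into whatever intermediate representations the task required, which is exactly what makes the approach transfer across tasks.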

Here are my installs of Berkeley's deep learning library Caffe and of NVIDIA's interactive deep learning interface DIGITS on an NVIDIA GPU:

Installs on mobile phones:

Clusters remain very interesting for parsing and manipulating large files, for example parsing Wikipedia pages with Spark.
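A Spark job of this kind is essentially a chain of filter/map steps distributed over partitions of the dump. Here is a minimal single-machine sketch of that data flow in plain Python (the sample lines are invented for illustration; the comments note the analogous PySpark RDD calls, `sc.textFile`, `.filter`, `.map`, which Spark would run across the cluster):

```python
# Invented sample lines standing in for a Wikipedia XML dump.
lines = [
    "<page><title>Paris</title></page>",
    "<comment>minor edit</comment>",
    "<page><title>Deep learning</title></page>",
]

# In Spark: sc.textFile("dump.xml").filter(lambda l: "<title>" in l)
pages = [l for l in lines if "<title>" in l]

def extract_title(line):
    # Crude extraction, enough for this one-line-per-page toy format.
    start = line.index("<title>") + len("<title>")
    end = line.index("</title>")
    return line[start:end]

# In Spark: .map(extract_title).collect()
titles = [extract_title(l) for l in pages]

print(titles)
```

The logic per line is trivial; what the cluster buys you is running it over terabytes of dump in parallel, which is exactly the workload where scale-out still beats a single GPU machine.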