TensorFlow is a library, like other deep learning libraries such as Caffe, Theano, or Torch. I’ve already written a lot about Caffe before.

# The main advantage of TensorFlow

Like Caffe, TensorFlow comes with APIs in both Python and C++.

Python is great for experimenting and for training model parameters. These tasks may require tons of data manipulation before you get to the correct prediction model. Remember that 80% of a data scientist’s job is preparing data: first cleaning it, then finding the correct representation from which a model can learn. Learning comes last, as the remaining 20% of the work.

TensorFlow enables GPU computation, which is also very useful for training, a step that usually takes from a few tens of minutes to a few tens of hours: training time can be divided by 5 to 50 on a GPU, which makes playing with the model’s hyperparameters very convenient.

Lastly, once the model has been trained, it has to be deployed in production apps, where speed and optimization usually become more important: the C++ API is the natural first choice for this purpose.

TensorFlow comes with TensorBoard, which looks like NVIDIA’s DIGITS interface (see my tutorials on Mac and on Ubuntu), but with much more flexibility in the events to display in the reports:

and a graph visualization tool, which works like the python/draw_net.py command in Caffe, but with different information (see tutorial).

I would say that TensorFlow brings nothing really new, but its main advantage is that everything is in one place and easy to install, which is very nice.

Like Theano, TensorFlow has taken the bet of symbolic computation, which brings lots of flexibility to layer definitions and simplifies coding thanks to automatic differentiation of the equations for backpropagation (gradient descent). This enables networks to be created directly and entirely from Python code (without having to write a ‘NetSpec’ interface as in Caffe).
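To see what automatic differentiation buys you, here is a minimal, library-free sketch of the idea using forward-mode dual numbers. This is only an illustration of the principle, not how TensorFlow actually implements it (it uses reverse-mode differentiation over the graph):

```python
class Dual:
    """A number carrying its value and its derivative, so that
    arithmetic propagates derivatives automatically (chain rule)."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

# derivative of f(x) = 3*x*x + 2*x at x = 4: expected 6*4 + 2 = 26
x = Dual(4.0, 1.0)        # seed the derivative dx/dx = 1
f = 3 * x * x + 2 * x
print(f.value, f.deriv)   # 56.0 26.0
```

Writing the expression once is enough: the derivative comes for free, which is exactly what makes coding a new layer so much lighter than deriving its backward pass by hand.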

TensorBoard brings to TensorFlow what DIGITS brings to Caffe and other deep learning libraries. Graph visualization provides information about the operations that will occur during processing, which no other library offers.

It is a “best-in-class” piece of work, but it can still be challenging to understand its real added value compared to other tools.

On the other hand, I see these main drawbacks:

• if some operations you need are not available in the library, I cannot imagine how complex it might be to add them…

• TensorBoard does not simplify understanding of the network: it is too thorough, and the display of some information, such as the layer parameters, is missing compared to other tools… so I’m even more lost…

Caffe remains for me the main tool where R&D happens, but I believe TensorFlow will keep growing in importance in the future.

The technical work done by Google is always impressive.

# Install

Let’s install TensorFlow on an iMac:

pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl


You will need protobuf version 3.0 or above (otherwise you’ll get a TypeError: __init__() got an unexpected keyword argument 'syntax'):

brew uninstall protobuf
brew install --devel --build-from-source --with-python -vd protobuf


The --devel option enables installing version ‘protobuf>=3.0.0a3’.
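Once installed, a quick check that the package imports correctly (on this 0.5.0 release the printed version should match the wheel you installed):

```shell
python -c "import tensorflow as tf; print(tf.__version__)"
```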

# How it works: a graph with inputs and variable data

As in Theano, the code you write is a symbolic abstraction: it describes operations, and these operations belong to a connected graph, with inputs and outputs.
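The principle can be sketched in a few lines of plain Python: operations are recorded as graph nodes, and nothing is computed until the graph is evaluated with concrete inputs. This is a toy illustration of deferred evaluation, not TensorFlow code:

```python
class Node:
    """A graph node: an operation plus its input nodes."""
    def __init__(self, op, inputs):
        self.op, self.inputs = op, inputs
    def eval(self, feed):
        # recursively evaluate inputs, then apply the operation
        return self.op(*(n.eval(feed) for n in self.inputs))

class Input(Node):
    """A named placeholder whose value is supplied at evaluation time."""
    def __init__(self, name):
        self.name = name
    def eval(self, feed):
        return feed[self.name]

# Build the graph symbolically: y = a * x + b (nothing is computed yet)
x, a, b = Input("x"), Input("a"), Input("b")
y = Node(lambda u, v: u + v, [Node(lambda u, v: u * v, [a, x]), b])

# Only now, with concrete inputs, does any computation happen
print(y.eval({"x": 2.0, "a": 3.0, "b": 1.0}))  # 7.0
```

Because the whole computation is described before it runs, the library is free to differentiate it, optimize it, and dispatch it to a CPU or GPU.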

The first thing to do is to initialize the variables that will store the data for the net. Initialization is performed with a few dedicated operations.

A session is created, in which all variables are stored. The session handles communication with the processor (CPU or GPU).
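As a sketch, using the TF 0.x-era API that matches the release installed above (later TensorFlow versions changed these calls):

```python
import tensorflow as tf

# A variable holds state (here, a 2x2 matrix of zeros) across session runs
W = tf.Variable(tf.zeros([2, 2]), name="weights")

# An op that, when run, initializes all declared variables
init = tf.initialize_all_variables()

# The session owns the variable storage and talks to the device (CPU, GPU)
sess = tf.Session()
sess.run(init)
print(sess.run(W))   # the initialized 2x2 zero matrix
```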

# Add neural network operations to the graph

The mnist_softmax.py example consists in adding symbolic operations on the data: tf.matmul (matrix multiplication), + (tensor addition), tf.nn.softmax (the softmax function), tf.reduce_sum (sum) and minimize (minimization with a GradientDescentOptimizer).
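The math those symbolic operations express can be written down directly. Here is a NumPy sketch of one forward pass and its loss, using the same operations (matrix multiplication, addition, softmax, sum); the gradient step itself is what the GradientDescentOptimizer derives automatically. The shapes follow MNIST (784-pixel images, 10 classes); the batch and labels are made up for illustration:

```python
import numpy as np

def softmax(z):
    # subtract the row max for numerical stability before exponentiating
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 784))     # a mini-batch of 4 flattened images
W = np.zeros((784, 10))           # weights, initialized to zero
b = np.zeros(10)                  # biases

y = softmax(x @ W + b)            # tf.nn.softmax(tf.matmul(x, W) + b)
y_ = np.eye(10)[[3, 1, 4, 1]]     # one-hot ground-truth labels

# cross-entropy loss: -tf.reduce_sum(y_ * tf.log(y))
loss = -np.sum(y_ * np.log(y))
print(round(loss, 4))             # 4 * log(10) ≈ 9.2103 with zero weights
```

With zero weights the softmax is uniform, so each example contributes log(10) to the loss; training consists of nudging W and b down the gradient of this quantity.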

This gives an accuracy of ~0.91.

When it comes to the small convolutional network example, the training accuracy stabilizes above 0.98 after 8,000 iterations, and the test accuracy reaches 0.9909 after 20,000 iterations.
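The core operation that this network adds, a 2-D convolution, can be sketched in NumPy. This is a naive sliding-window version for illustration; tf.nn.conv2d does the same thing in optimized, batched, multi-channel form:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D cross-correlation, as used in conv layers."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # dot product of the kernel with the window under it
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])  # diagonal-difference filter
print(conv2d_valid(image, kernel))            # a 3x3 map, every entry -5.0
```

The learned kernels play the same role as this hand-written filter, each one detecting a local pattern across the whole image with a small shared set of weights.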

Let’s try the feed-forward neural network defined in fully_connected_feed.py:

This gives the following results:

Training Data Eval:
Num examples: 55000  Num correct: 49365  Precision @ 1: 0.8975
Validation Data Eval:
Num examples: 5000  Num correct: 4530  Precision @ 1: 0.9060
Test Data Eval:
Num examples: 10000  Num correct: 9027  Precision @ 1: 0.9027


And in TensorBoard (at http://localhost:6006/):

Have a look at my tutorial about symbolic programming in TensorFlow or Theano.

Well done!