This guide gives you an overview of the TensorFlow architecture, which helps you in TensorFlow programming and in understanding what's going on under the hood. A solid understanding of the architecture is essential for writing computationally intensive algorithms with the TensorFlow framework.

What is TensorFlow

TensorFlow is an open-source machine learning library developed by Google. Below are the three main requirements this framework fulfills.

  • Performance – The framework can handle very large amounts of data, and the associated computation, using CPU, GPU, or TPU units.
  • Flexibility – The framework is capable of implementing ongoing research and powerful new machine learning algorithms, enabling state-of-the-art artificial intelligence techniques.
  • Portability – It is well suited to continuous development, application scaling, and deployment on various devices, e.g. mobile phones, embedded devices, and web servers.

Core Concept of TensorFlow

The core of TensorFlow is written in C++. The core concept is to separate the design of the data workflow from the actual data: first we build a dataflow graph, and then we stream the data through the graph. Each node in the graph represents a mathematical operation, e.g. the ReLU activation function, and each edge of the graph represents a multi-dimensional array. These multi-dimensional arrays are known as tensors.
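
Here is a minimal sketch of this graph idea using the TensorFlow 2.x API, where `tf.function` traces Python code into a graph (the function name and input values are illustrative):

```python
import tensorflow as tf

# tf.function traces this Python function into a TensorFlow graph:
# matmul and relu become nodes, and the tensors flowing between them
# are the edges.
@tf.function
def relu_matmul(a, b):
    return tf.nn.relu(tf.matmul(a, b))

a = tf.constant([[1.0, -2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])  # identity matrix

result = relu_matmul(a, b)
print(result.numpy())  # ReLU clips the negative entry -2.0 to 0.0
```

Here `a` and `b` are tensors (the edges), while `tf.matmul` and `tf.nn.relu` are operations (the nodes) in the traced graph.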

Throughout your development, keep in mind that TensorFlow is built on this graph concept. Let's dig into the main phases of a machine learning implementation using TensorFlow.

Load Data

The tf.data API is used to load and pre-process data. With it, we create data input pipelines to be used in training the model. These pipelines make it easy to handle large amounts of data without worrying about memory issues, and they support many types of data formats, e.g. images and text.
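
A minimal tf.data input pipeline might look like this (the data here is synthetic and the sizes are illustrative):

```python
import tensorflow as tf

# Synthetic data: 100 samples with 4 features each, plus binary labels.
features = tf.random.uniform((100, 4))
labels = tf.random.uniform((100,), maxval=2, dtype=tf.int32)

# Build the pipeline: shuffle, batch, and prefetch so that data
# preparation overlaps with model training.
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=100)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)

for batch_features, batch_labels in dataset.take(1):
    print(batch_features.shape)  # (32, 4)
```

The same pipeline pattern applies when the source is files on disk (e.g. TFRecord or image files) rather than in-memory tensors.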

Model Training

The tf.keras API is used to build the model, train it, and evaluate its performance on test data. It is a widely used high-level API in TensorFlow development, and it's very user-friendly as well. If you're a Python developer, you can use the standalone Keras library from keras.io for the same purpose.
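
A small end-to-end tf.keras sketch, using synthetic data (the layer sizes and hyperparameters are illustrative, not a recommendation):

```python
import tensorflow as tf

# Synthetic training data: 200 samples, 4 features, 2 classes.
x_train = tf.random.uniform((200, 4))
y_train = tf.random.uniform((200,), maxval=2, dtype=tf.int32)

# Build a small classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Compile, train, and evaluate.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)
loss, accuracy = model.evaluate(x_train, y_train, verbose=0)
```

In a real project, `evaluate` would be called on a held-out test set rather than the training data.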

An alternative to Keras is TensorFlow's Estimators API. Estimators empower you to build complex machine learning models on top of TensorFlow's low-level APIs. You can use premade estimators, e.g. for linear or logistic regression, or write your own.

Computing Device

Machine learning algorithms involve complex computations, so they need high computing power. TensorFlow enables you to write model code independently of the hardware used for the computation. You can use a CPU for simple algorithms, e.g. linear regression, or a GPU for large neural networks such as image-recognition models. You can switch from CPU to GPU if needed without changing any of the model-training code.
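
For example, the same operation can be pinned to a specific device with `tf.device` (a sketch; when no device is specified, TensorFlow places operations automatically):

```python
import tensorflow as tf

a = tf.random.uniform((256, 256))
b = tf.random.uniform((256, 256))

# Pin the matrix multiply to the CPU explicitly.
with tf.device("/CPU:0"):
    c_cpu = tf.matmul(a, b)

# Run on the first GPU only if one is actually present.
if tf.config.list_physical_devices("GPU"):
    with tf.device("/GPU:0"):
        c_gpu = tf.matmul(a, b)

print(c_cpu.shape)  # (256, 256)
```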

It's also easy to distribute the training workload across multiple machines or devices, e.g. multiple CPUs, GPUs, or TPUs.
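
One way to do this is the `tf.distribute` API; the sketch below uses `MirroredStrategy`, which replicates the model across all local GPUs (falling back to the CPU when none are present). The tiny model and data are illustrative:

```python
import tensorflow as tf

# MirroredStrategy mirrors variables across every local replica and
# keeps them in sync during training.
strategy = tf.distribute.MirroredStrategy()
print("replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created inside the scope are replicated on each device.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")

x = tf.random.uniform((64, 3))
y = tf.random.uniform((64, 1))
model.fit(x, y, epochs=1, batch_size=16, verbose=0)
```

Multi-machine setups use other strategies (e.g. `MultiWorkerMirroredStrategy`) with the same `scope()` pattern.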

Deploy Model

Once a model is ready, it needs to be deployed so that it can be used in a real application. E.g. if you're building a prediction application, you can use the trained linear regression model in it.

TensorFlow provides different deployment libraries according to the production environment. Below are the main deployment libraries:

  • TensorFlow Serving: allows us to deploy the model on a cloud server and access it through a REST API.
  • TensorFlow Lite: lets you deploy the model to mobile or embedded devices.
  • TensorFlow.js: allows us to deploy the model in the browser or in a JavaScript server environment, e.g. Node.js.
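
All three of these consume a model exported in the SavedModel format. A minimal export sketch (the `Scaler` module and the `/tmp/demo_model/1` path are illustrative; TensorFlow Serving expects a numeric version subdirectory like the `1` here):

```python
import tensorflow as tf

class Scaler(tf.Module):
    """A tiny illustrative model that multiplies its input by a weight."""

    def __init__(self):
        super().__init__()
        self.weight = tf.Variable(2.0)

    # The input_signature fixes the serving signature that is exported.
    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return x * self.weight

export_dir = "/tmp/demo_model/1"  # version subdirectory for Serving
tf.saved_model.save(Scaler(), export_dir)

# The directory now contains saved_model.pb plus a variables/ folder,
# and can be loaded back (or served) from disk.
loaded = tf.saved_model.load(export_dir)
print(loaded(tf.constant([1.0, 2.0])).numpy())  # [2. 4.]
```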

This should give you an easy-to-understand overview of TensorFlow, and hopefully it motivates you to use TensorFlow as your preferred machine learning framework.