Building the TensorFlow neural network library in a Windows environment


Developing and testing convolutional neural networks (CNNs) in a Python environment is currently not difficult.

There are many ready-made frameworks available for implementing a CNN of any structure. Difficulties usually arise when you need to use a CNN outside of Python, for example, from C++ in a program that runs on Windows. Usually, you need to implement the prediction functionality from an already trained model.

So, we created a TensorFlow or Keras model, trained it, and saved it (usually in HDF5 format). Next, we want to write a predict function in C++ on Windows 7 x64 or higher, load the model and some data, and get the network's response.
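As an illustration of what such a predict function might look like once the libraries described below are built, here is a minimal sketch using the TensorFlow 1.x C++ session API. The model file name model.pb (a frozen graph; a Keras HDF5 model would first have to be frozen into this format), the tensor names input:0 and output:0, and the input shape are all assumptions for illustration, not something taken from the article:

```cpp
// Minimal predict sketch with the TensorFlow 1.x C++ API.
// Assumes a frozen graph "model.pb" with tensors named "input:0" and
// "output:0"; these names and the input shape are placeholders.
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"

#include <iostream>
#include <memory>
#include <vector>

int main() {
    using namespace tensorflow;

    // Create a session.
    Session* raw = nullptr;
    Status status = NewSession(SessionOptions(), &raw);
    if (!status.ok()) { std::cerr << status.ToString() << "\n"; return 1; }
    std::unique_ptr<Session> session(raw);

    // Load the serialized graph and attach it to the session.
    GraphDef graph_def;
    status = ReadBinaryProto(Env::Default(), "model.pb", &graph_def);
    if (!status.ok()) { std::cerr << status.ToString() << "\n"; return 1; }
    status = session->Create(graph_def);
    if (!status.ok()) { std::cerr << status.ToString() << "\n"; return 1; }

    // Feed a single 28x28 grayscale image of zeros (shape is an assumption).
    Tensor input(DT_FLOAT, TensorShape({1, 28, 28, 1}));
    input.flat<float>().setZero();

    // Run the graph and print the network's response.
    std::vector<Tensor> outputs;
    status = session->Run({{"input:0", input}}, {"output:0"}, {}, &outputs);
    if (!status.ok()) { std::cerr << status.ToString() << "\n"; return 1; }
    std::cout << outputs[0].DebugString() << "\n";
    return 0;
}
```

Compiling this requires the headers from the cloned repository and linking against the tensorflow.lib produced by the build below.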

There are a lot of similar examples in C++, but the TensorFlow libraries are required to build any of them. This article describes the steps for building the TensorFlow libraries, version 1.10.0, using CMake.

System requirements

Windows 7 x64 or higher, 16 GB RAM, 128 GB SSD (an HDD will of course work too, but the build will be slower)

Installation steps

Visual Studio
Install Microsoft Visual Studio Community 2017, select only the Desktop development with C++ workload, and add the VC++ 2015.3 v14.00 (v140) toolset for desktop component to it.

Python environment
Download and install the Anaconda package. It is a complete Python distribution with a pre-installed set of the most popular modules. During the installation process, do not forget to check the box that adds the package to the system PATH variable.

CMake, Git, SWIG
Download and install CMake for Windows. Add the installation path to PATH.
Download and install Git for Windows. Add the installation path to PATH.
Download and install SWIG. This is a package that makes it possible to call functions written in one language from code in another.

CUDA
Optional. Version 9.0 has been tested.

Run the environment preparation script C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvars64.bat


Create a build directory and download the TensorFlow repository

mkdir C:\src
cd C:\src
git clone https://github.com/tensorflow/tensorflow.git
cd C:\src\tensorflow
git checkout r1.10
cd C:\src
ren tensorflow tensorflow.1.10.0.cpu
cd C:\src\tensorflow.1.10.0.cpu\tensorflow\contrib\cmake
mkdir build
cd build

(Note: we change back to C:\src before renaming, since cmd cannot rename the directory it is currently inside.)

Create a configure.bat configuration script and put the following cmake call there

cmake .. -A x64 -T host=x64 ^
-DPYTHON_LIBRARIES=PATH\TO\ANACONDA\libs\python35.lib ^
-Dtensorflow_ENABLE_GRPC_SUPPORT=ON ^
-Dtensorflow_BUILD_SHARED_LIB=ON ^
-Dtensorflow_ENABLE_GPU=OFF ^
-Dtensorflow_WIN_CPU_SIMD_OPTIONS=/arch:AVX

If you need CUDA support, add:

-DCUDNN_HOME="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0" ^
-DCUDA_HOST_COMPILER="C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/amd64/cl.exe"

A few notes on configuration:

  1. tensorflow_ENABLE_GRPC_SUPPORT is not actually needed, but without it my project refused to build.
  2. tensorflow_BUILD_SHARED_LIB needs to be enabled because our goal is to get a DLL.
  3. tensorflow_ENABLE_GPU – if enabled, you need to install the CUDA Development Tools package (I compiled with version 9.0), and the project will take twice as long to build.
  4. tensorflow_WIN_CPU_SIMD_OPTIONS – flag for using newer instruction sets. This flag must be set carefully: if you specify AVX2, the build will not run on processors where this instruction set is not available.
  5. There were several attempts to use the Intel MKL libraries, which provide optimized algorithms for convolutional networks. Oddly enough, enabling these libraries slowed the models down by almost half. It may well be that I missed something, but I decided to remove them from the configuration.

After running this configuration script, all files for the build will be generated.

Script for building

cd C:\src\tensorflow.1.10.0.cpu\tensorflow\contrib\cmake\build
Create a build script build.bat and put the following lines there

"C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\MSBuild.exe" ^
/m:1 ^
/p:CL_MPCount=1 ^
/p:Configuration=Release ^
/p:Platform=x64 ^
/p:PreferredToolArchitecture=x64 ALL_BUILD.vcxproj ^
/filelogger

If you have a multi-core processor, you can set the /m:3 and /p:CL_MPCount=3 options to speed up the build process.

After starting the script, the build process begins, and you can safely go for a walk for a couple of hours. If you are building the libraries with CUDA, you can walk for four hours.

After the build finishes, a directory like C:\src\tensorflow.1.10.0.cpu\tensorflow\contrib\cmake\build\Release will appear, in which you can find the assembled libraries tensorflow.dll, tensorflow.lib, and tensorflow.def.
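To check that the freshly built libraries link correctly, a minimal smoke test can be compiled against tensorflow.lib, with tensorflow.dll placed next to the executable. This sketch assumes only the C API header tensorflow/c/c_api.h from the cloned repository:

```cpp
// Minimal smoke test for the built library: print the TensorFlow version.
// Link against Release\tensorflow.lib and keep tensorflow.dll next to the exe.
#include <cstdio>

#include "tensorflow/c/c_api.h"  // C API header from the cloned repository

int main() {
    // TF_Version() is exported by tensorflow.dll; for this build it should
    // report a 1.10 version string.
    std::printf("TensorFlow C library version: %s\n", TF_Version());
    return 0;
}
```

If this program runs and prints a version, the DLL was built and is being found correctly; linker or loader errors here usually point to a missing PATH entry or an x86/x64 mismatch.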

In conclusion, I want to add that the TensorFlow developers consider all of the above steps unofficial, and it is likely that the ability to build via CMake will be removed in the future. At the moment, however, this is the only way to build your own TensorFlow library on Windows.
