Managing Deep Learning Development Complexity
Source – nextplatform.com
For developers, deep learning systems are becoming more interactive and complex. From more malleable datasets that can be iteratively augmented, to more dynamic models, to continuous learning built into neural networks, there is a growing need for lightweight tools that manage the process from start to finish.
“New training samples, human insights, and operation experiences can consistently emerge even after deployment. The ability of updating a model and tracking its changes thus becomes necessary,” says a team from Imperial College London that has developed a library to manage the iterations deep learning developers make across complex projects. “Developers have to spend massive development cycles on integrating components for building neural networks, managing model lifecycles, organizing data, and adjusting system parallelism.”
To better manage development, the team built TensorLayer, a versatile Python library that takes an integrated approach: all elements of the process (operations, model lifecycles, parallel computation, failures) are abstracted into modules. These include a layer module for managing neural network layers, a model module for models and their lifecycles, a dataset module that provides a unified representation for all training data across all systems, and a workflow module that addresses fault tolerance. As the name implies, TensorFlow is the core platform for training and inference, which feeds into MongoDB for storage—a common setup for deep learning research shops.
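The modular split described above can be sketched in a few lines of plain Python. This is a hypothetical illustration of the design idea—separating layers from model lifecycle tracking—not TensorLayer's actual API; the class and method names (`DenseLayer`, `Model`, `checkpoint`) are invented for the example.

```python
# Hypothetical sketch of the modular design idea, not TensorLayer's real API.

class DenseLayer:
    """A layer module: wraps one transformation, keeps its parameters visible."""
    def __init__(self, weights, bias):
        self.weights = weights          # list of rows
        self.bias = bias

    def forward(self, x):
        # plain matrix-vector product plus bias
        return [sum(w * xi for w, xi in zip(row, x)) + b
                for row, b in zip(self.weights, self.bias)]

class Model:
    """A model module: stacks layers and tracks versions across iterations."""
    def __init__(self, layers):
        self.layers = layers
        self.versions = []              # lifecycle: snapshots of parameters

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

    def checkpoint(self, tag):
        # record the current parameters so a model update can be tracked
        self.versions.append((tag, [layer.weights for layer in self.layers]))

model = Model([DenseLayer([[1.0, 0.0], [0.0, 1.0]], [0.5, -0.5])])
print(model.forward([2.0, 3.0]))   # identity weights plus bias -> [2.5, 2.5]
model.checkpoint("v1")
```

The point of the pattern is that the layer's parameters stay inspectable and the model records its own history, which is what makes post-deployment updates trackable.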
The team says that while existing tools like Keras and TFLearn are useful, they are not as extensible as they need to be as networks become more complex and iterative. They provide imperative abstractions to lower the adoption barrier, but in turn mask the underlying engine from users. Though good for bootstrapping, such tools become hard to tune and modify from the bottom up, which is necessary for tackling many real-world problems.
Compared with Keras and TFLearn, TensorLayer provides not only the high-level abstraction but also an end-to-end workflow including data pre-processing, training, post-processing, serving modules, and database management, all of which are key for developers building the entire system.
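The end-to-end workflow can be pictured as a chain of small stages feeding a store. The sketch below is illustrative only: the stage functions, the toy "training" step, and the dict standing in for the MongoDB backend are all assumptions made for the example, not TensorLayer interfaces.

```python
# Hypothetical end-to-end workflow sketch; names and record formats are
# invented for illustration, not taken from TensorLayer.

def preprocess(samples):
    # normalize raw inputs into a unified representation
    peak = max(abs(v) for s in samples for v in s)
    return [[v / peak for v in s] for s in samples]

def train(samples):
    # stand-in for training: the "model" is just the per-feature mean
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def postprocess(model):
    # round parameters before serving
    return [round(p, 3) for p in model]

def store(db, key, model):
    # stand-in for the MongoDB-backed storage the article mentions
    db[key] = model
    return db

db = {}
model = postprocess(train(preprocess([[2.0, 4.0], [4.0, 8.0]])))
store(db, "model:v1", model)
print(db)   # {'model:v1': [0.375, 0.75]}
```

Keeping each stage as its own module is what lets one stage be swapped or re-run without disturbing the rest of the pipeline.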
TensorLayer advocates a more flexible and composable paradigm: neural network libraries should be usable interchangeably with the native engine. This allows users to tap into the ease of pre-built modules without losing visibility. This noninvasive nature also makes it viable to combine TensorLayer with other TensorFlow wrappers such as TF-Slim and Keras. The team argues, however, that this flexibility does not sacrifice performance.
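The "noninvasive" idea is that library layers and raw user code operate on the same underlying values, so the two can be interleaved freely. A minimal sketch in plain Python, with invented names (`ScaleLayer`, `native_relu`) standing in for library abstractions and native-engine code:

```python
# Hypothetical sketch of mixing pre-built layers with raw "engine" code.
# Names are illustrative, not actual TensorLayer or TensorFlow APIs.

class ScaleLayer:
    """A pre-built library layer."""
    def __init__(self, factor):
        self.factor = factor
    def __call__(self, x):
        return [self.factor * v for v in x]

def native_relu(x):
    # code written directly against the underlying engine
    return [max(0.0, v) for v in x]

# One pipeline, freely alternating between the two levels:
x = [-1.0, 2.0]
x = ScaleLayer(3.0)(x)   # library abstraction
x = native_relu(x)       # drop down to the engine at any point
x = ScaleLayer(0.5)(x)   # back to the library
print(x)                 # [0.0, 3.0]
```

Because nothing is hidden behind the abstraction, a user can tune or replace any step from the bottom, which is the extensibility gap the team identifies in purely imperative wrappers.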
There are a number of applications the team highlights in the full paper, which also provides details about each of the modules, the overall architecture, and current developments. The applications include generative adversarial networks, deep reinforcement learning, and hyperparameter tuning in an end-user context. TensorLayer has also been used for multi-model research, image transformation, and medical signal processing since its GitHub release last year.
TensorLayer is in active development and has received numerous contributions from an open community. It has been widely used by researchers from Imperial College London, Carnegie Mellon University, Stanford University, Tsinghua University, UCLA, and Linköping University, among others, as well as engineers from Google, Microsoft, Alibaba, Tencent, ReFULE4, Bloomberg, and many other companies.