Deep Learning Architectures You Must Know

25 Aug · by aiuniverse · In Data Science, Deep Learning, Machine Learning


Data analytics is among the hottest emerging technologies of recent years, with applications ranging from marketing to customer service to HR and beyond. Since its inception, however, the nature of analytics has changed considerably. Today the focus is on predictive modelling and on complex machine learning and deep learning algorithms that have the potential to change the future of the enterprise. Much of this modern machine intelligence stems from deep learning architectures. Deep learning is a subfield of machine learning that uses deep, highly interconnected neural network models. Here is a look at some of the popular deep learning architectures that enterprises are already using, and will continue to dig into, in the future.

  • AlexNet: This deep architecture was introduced in 2012 by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, a deep learning pioneer whose work dates back to the eighties. Much of the path-breaking research in the deep learning arena has built on this architecture. Though newer models have since surpassed it, enterprises still use it as a starting point for neural networks in areas like speech recognition and computer vision.
  • VGG Net: Introduced by the Visual Geometry Group at Oxford, it follows a pyramidal shape and has up to 19 layers. The architecture comprises stacks of convolutional layers followed by pooling layers. It has been used extensively as a benchmark for image recognition tasks. Another advantage is that pre-trained VGG networks are freely available on the internet, making it highly accessible.
  • GoogLeNet: Designed by Google researchers, it has been making an impact since 2014. With 22 layers, it was among the first deviations from purely sequential architectures: it consists of "inception modules" stacked one over the other, whose parallel paths allow parts of the network to be computed concurrently, speeding up training. The design also keeps the parameter count low, making the model compact and space-efficient.
  • ResNet: Short for Residual Networks, it is made up of multiple residual modules stacked one upon another to form a network. Its key innovation is the skip (identity) connection inside each module, which adds the module's input to its output; this lets gradients flow through very deep networks and keeps training stable even at depths of a hundred layers or more.
  • ResNeXt: A strong model for object recognition, it derives from and improves on the ideas behind Inception and ResNet, combining residual connections with parallel paths inside each block to create a highly effective deep learning architecture.
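To make the residual idea concrete, here is a minimal sketch of a residual block in plain NumPy. It is a toy illustration, not a real ResNet: the block F(x) is assumed to be two small fully-connected layers with a ReLU in between (real ResNets use convolutional layers and batch normalization), and all weights here are randomly initialized for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Element-wise rectified linear unit
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # F(x): a toy two-layer transformation standing in for conv layers
    fx = relu(x @ w1) @ w2
    # The skip connection adds the input back: output = F(x) + x.
    # When F(x) is near zero, the block passes x through almost unchanged,
    # which is what keeps very deep networks trainable.
    return relu(fx + x)

dim = 4
x = rng.standard_normal(dim)
w1 = rng.standard_normal((dim, dim)) * 0.1
w2 = rng.standard_normal((dim, dim)) * 0.1

y = residual_block(x, w1, w2)
print(y.shape)  # (4,)
```

Note the design choice: because the skip path carries x directly, a block whose weights are all zero reduces to the identity (up to the final ReLU), so stacking many such blocks cannot make the network harder to optimize than a shallower one.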

These are some of the contemporary architectures in use for building powerful machine learning systems. Data scientists and data architects should familiarize themselves with these and other emerging techniques in the field, so as to align the organizational tech strategy with the desired business outcomes.
