Augment Big Data Strategy with Advanced Analytics


Source: packagingstrategies.com

The debate between cloud and edge computing strategies remains a point of contention for many controls engineers in the packaging industry. However, most agree that smart factories in an Industry 4.0 context must efficiently collect, visualize and analyze data from machines and production lines to enhance equipment performance and production processes. Using advanced analytics algorithms, companies can sift through this mass of information, or Big Data, to identify areas for improvement.

To some, edge computing devices may seem to create an unnecessary step when all data can simply be handled in the cloud. Microsoft Azure, Amazon Web Services (AWS) and other cloud platforms offer virtually limitless storage for this purpose. Moreover, TLS encryption for MQTT and the security mechanisms built into the OPC UA protocol help ensure that data remain secure in transit. When it comes to analytics and simple data management, however, edge computing presents important advantages for closely monitoring equipment health and maximizing uptime in production.

Because of the massive amount of data that modern machines can produce, bandwidth can severely limit cloud computing or push costs to unacceptable levels. New software solutions for PC-based controllers, such as TwinCAT Analytics from Beckhoff, allow controls engineers to leverage advanced algorithms locally, in addition to data pre-processing and compression. As a result, a key advance is to process data on the edge first: individual packaging machines and lines can identify inefficiencies on their own and make improvements before sending data to the cloud for further analysis across the enterprise.

Bandwidth Burdens when Streaming Machine Data

Performing Big Data analytics exclusively in the cloud often proves expensive in terms of storage space. However, the more difficult proposition is getting the data there in the first place. Bandwidth can become a serious constraint for factories, since the average Internet connection speed across the globe is 7.2 Mbps, according to the most recent connectivity report from Akamai.

When even one machine sends data to the cloud, let alone multiple machines, little bandwidth remains for the rest of the operation. Two use cases published in a 2017 article by Kloepfer, Koch, Bartel and Friedmann illustrate this point. In the first, monitoring the structural dynamics of wind turbines with 50 sensors at a 100 Hz sampling rate required 2.8 Mbps of bandwidth to stream all data to the cloud as standard JavaScript Object Notation (JSON). The second case, condition monitoring of assets in intralogistics, used 20 sensors at a 1,000 Hz sampling rate and required 11.4 Mbps using JSON. These tests are relevant because JSON is a common format for sending data to the cloud or across the web.

Without compression or pre-processing mechanisms, an average 7.2 Mbps Internet connection cannot stream data from three or more large machines that require advanced measurement, condition monitoring and production traceability. A factory must either provision a much larger connection (or multiple connections) or leverage advanced analytics on the edge.
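A back-of-the-envelope calculation shows how quickly such streams saturate a connection. The sketch below assumes a hypothetical average JSON record size of about 70 bytes per sample (timestamp, sensor ID, value and delimiters); with that assumption, the formula reproduces the 2.8 Mbps figure cited above for the wind turbine case.

```python
# Rough estimate of the bandwidth needed to stream sensor samples as JSON.
# The per-sample payload size is an illustrative assumption; real JSON
# framing and field names vary by application.

def json_stream_mbps(sensors: int, rate_hz: int, bytes_per_sample: int = 70) -> float:
    """Approximate streaming bandwidth in Mbps for JSON-encoded samples."""
    bits_per_second = sensors * rate_hz * bytes_per_sample * 8
    return bits_per_second / 1_000_000

# Wind turbine case: 50 sensors at 100 Hz
print(round(json_stream_mbps(50, 100), 1))    # -> 2.8

# Intralogistics case: 20 sensors at 1,000 Hz exceeds a 7.2 Mbps link on its own
print(json_stream_mbps(20, 1000) > 7.2)       # -> True
```

Even under these simplified assumptions, a single high-rate machine can consume more than the average connection provides, which is the motivation for pre-processing on the edge.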

Edge Devices and Advanced Algorithms

In the past, most programmable logic controllers (PLCs) could handle repetitive machine tasks, but possessed the computing prowess of a smart toaster. Today's Industrial PCs (IPCs) feature ample storage and powerhouse processors, such as Intel® Core™ i7 or Intel® Xeon® offerings, with anywhere from four to 40 cores. TwinCAT 3 automation software, for example, offers a complete IPC platform that runs alongside Windows, easily supports third-party applications and enables remote access. Most importantly, PC-based control software can provide advanced algorithms to manage data, such as pre-processing, compression, measurement and condition monitoring, without requiring a separate, stand-alone software platform.

Condition monitoring performs many operations locally, such as converting raw accelerometer data into the frequency domain. This can be done on an edge device or within the machine controller's PLC program. When analyzing vibration, for example, the information is often collected as a 0-10 V or 4-20 mA signal. This can be converted to a more usable format on the controller through a Fast Fourier Transform (FFT) algorithm. More extensive evaluations of machine vibrations are possible using DIN ISO 10816-3. To monitor bearing life and other specific components, algorithms are readily available to add to a PLC program for calculating the envelope spectrum first and then the power spectrum. Many common machine condition and predictive maintenance algorithms can be evaluated within the machine control, or on an edge device.
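The FFT step described above can be sketched in a few lines. This is a minimal illustration, not a PLC implementation: the vibration signal is synthetic, and the sampling rate and frequency content are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch: convert a raw vibration signal to the frequency domain
# with an FFT, as an edge device or controller library might do.
rng = np.random.default_rng(0)

fs = 1000                       # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)   # one second of samples
# Simulated accelerometer signal: 50 Hz machine vibration plus sensor noise
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(fs)

# Amplitude spectrum and matching frequency axis
spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# Dominant vibration frequency (skipping the DC bin)
dominant = freqs[np.argmax(spectrum[1:]) + 1]
print(dominant)  # -> 50.0
```

In practice the resulting spectrum, rather than the raw waveform, is what gets logged or forwarded, which is exactly the kind of data reduction that keeps cloud bandwidth manageable.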

To optimize a Big Data strategy from the ground level up, automation software should offer built-in algorithms to process both deterministic and stochastic data. If the data is deterministic, controllers using pre-processing algorithms can send certain values only upon a change; since the recipient knows the mathematical correlation, it can reconstruct the original signal if desired. For stochastic data, the controller can send statistical information, such as the average value. Although the original signal cannot be recovered, the recipient can still make use of the compressed, statistical information.
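Both strategies can be sketched simply. The function and parameter names below are illustrative, not from any particular automation library: a send-on-change filter with an optional deadband for deterministic values, and a statistical summary for a window of stochastic data.

```python
# Hedged sketch of two edge-side pre-processing strategies:
# send-on-change for deterministic data, summary statistics for stochastic data.

def send_on_change(samples, deadband=0.0):
    """Forward a value only when it differs from the last sent value
    by more than the configured deadband."""
    sent, last = [], None
    for s in samples:
        if last is None or abs(s - last) > deadband:
            sent.append(s)
            last = s
    return sent

def summarize(samples):
    """Compress a window of stochastic data to summary statistics."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return {"count": n, "mean": mean,
            "min": min(samples), "max": max(samples), "variance": var}

print(send_on_change([1.0, 1.0, 1.0, 2.0, 2.0]))  # -> [1.0, 2.0]
print(summarize([4.0, 6.0])["mean"])              # -> 5.0
```

In the first case, five samples compress to two transmitted values that fully describe the original step signal; in the second, an arbitrarily long window compresses to a fixed-size record.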

It is also possible to implement algorithms on the IPC to monitor process data over a set sequence. This involves periodically writing input data, according to a configured number of learned points, to a file or database. After standard values are stored, such as torque for a motion operation, algorithms compare each cycle's values against them. Verifying that the data stay within a configured tolerance band creates a type of process window monitoring, which can trigger immediate readjustment since the local controller reacts in real time.
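The comparison step above can be sketched as follows. The learned torque profile and tolerance are hypothetical values chosen for illustration.

```python
# Illustrative sketch of process window monitoring: compare each cycle's
# measured values against a learned reference profile within a configured band.

learned_torque = [1.2, 1.5, 1.8, 1.6, 1.3]   # stored reference profile (Nm), assumed
tolerance = 0.2                               # allowed deviation per point, assumed

def process_window_violations(cycle, reference, band):
    """Return indices where the cycle leaves the configured process window."""
    return [i for i, (c, r) in enumerate(zip(cycle, reference))
            if abs(c - r) > band]

good_cycle = [1.25, 1.45, 1.75, 1.65, 1.30]
bad_cycle  = [1.25, 1.45, 2.30, 1.65, 1.30]

print(process_window_violations(good_cycle, learned_torque, tolerance))  # -> []
print(process_window_violations(bad_cycle, learned_torque, tolerance))   # -> [2]
```

Because this check runs on the local controller, a violation at any point in the sequence can trigger an adjustment within the same cycle rather than after a round trip to the cloud.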

Big Data on Both Edge and Cloud

Running advanced algorithms on a local edge device reduces cloud bandwidth requirements and offers an efficient solution for process optimization guided by Big Data. However, that does not mean a packaging plant should disconnect from the cloud. In the age of IIoT, it is essential to gather and easily access data across an operation, even if many analysis and decision-making tasks can be completed on local hardware first.

To decide what needs to be sent to the cloud and what can be processed or pre-processed locally, make sure to ask a few key questions. First, what are the goals your operation wants to achieve through data acquisition in this instance? Next, which data sets from which machines need to be analyzed in order to achieve these goals? Finally, what types of data insights does the operation need to improve efficiency and profitability?

Local monitoring with edge computing often works most efficiently to improve the operation of individual machines. However, the cloud provides the best platform to compare separate machines, production lines or manufacturing sites against each other. Implementing both allows an operation to maximize its Big Data strategies.
