The Future of Machine Learning at the Edge
Source – datacenterknowledge.com
While AI and machine learning are often used interchangeably, machine learning is simply a way of achieving AI. Machine learning at its core is the ability of a machine or system to automatically learn and improve its operation or functions without human input, which is an essential element of any AI.
Now, where all this intelligent learning and processing happens is another story. As technologies like machine learning and AI drive exponential growth in the demand for data, IT environments are being pushed toward a decentralized, hybrid computing ecosystem.
Use Cases for the Intelligent Edge
While the majority of machine learning technologies are being hosted in remote cloud data centers, there is a shift happening toward the edge. Businesses are finding that with certain applications, it makes more sense to apply machine learning at the network edge rather than connect back to the cloud. The edge is advantageous for machine learning for a number of reasons, but a key benefit is minimized latency, which leads to faster data processing and real time, automated decision-making.
For example, a content provider may use machine learning to understand what viewers in a specific city are currently watching, so it can cache that content locally at the edge to improve the viewing experience and lower operating costs. Running the machine learning algorithm locally minimizes latency and lets the algorithm learn in real time, as it was designed to do.
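The caching decision described above can be sketched in a few lines. This is a minimal illustration, not a production content-delivery system: the function name, the event format, and the cache capacity are all hypothetical, and a real deployment would use windowed counts and eviction policies rather than a one-shot ranking.

```python
from collections import Counter

def select_cache_titles(view_events, capacity):
    """Rank titles by local view count and return the top `capacity`
    titles to cache at this edge node (hypothetical helper)."""
    counts = Counter(view_events)
    return [title for title, _ in counts.most_common(capacity)]

# Recent views observed at one edge location (illustrative data)
events = ["show_a", "show_b", "show_a", "show_c", "show_a", "show_b"]
print(select_cache_titles(events, 2))  # -> ['show_a', 'show_b']
```

Because the ranking runs on data observed at that specific location, the cache reflects local viewing habits without a round trip to a central cloud.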
In the case of self-driving cars, machine learning models are trained both locally in the car itself and at the edge to cut back on the bandwidth and latency needed to process data, which can amount to roughly 4,000 GB a day per vehicle, nearly 3,000 people's worth of data, in real time. There is also a life-safety factor: the ability of these vehicles to process data instantly is critical, as automated decision-making based on road conditions or unexpected events can keep passengers out of harm's way.
With data processing, proximity to the network matters, so it’s natural to see a progression toward the edge in order to capture and analyze data on the spot.
Machine Learning at the Edge: Best Practices
As businesses look to deploy machine learning at the edge, there are a few key factors to consider.
- Establish objectives. It’s important to first establish the business’s objectives for leveraging machine learning in order to determine what data sources and edge technology solutions are required to support those goals.
- Identify “the question.” In order to establish business objectives for machine learning, businesses need to identify the question they are trying to answer. For example, “What three drivers are negatively impacting customer satisfaction?” It’s important that the question be pointed, identifiable, and ultimately translatable into a statistical process, so that meaningful answers can be derived from the massive amount of data being collected.
- Gain compute, network and storage baselines. Businesses also need to consider that machine learning algorithms are designed to ingest copious amounts of data and consume an enormous amount of computational power. Understanding the compute requirements, and then building out edge infrastructure with adequate network, cooling and storage capacity, will be critical to handling the workload demand.
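The baselining step above can start as a back-of-envelope calculation before any hardware is specified. The sketch below is a hypothetical estimator, not a vendor sizing tool: the input figures (ingest rate, retention window, inference load) are illustrative assumptions a business would replace with its own measurements.

```python
def edge_capacity_estimate(ingest_gb_per_day, retention_days,
                           inferences_per_sec, gflops_per_inference):
    """Back-of-envelope storage and compute baseline for one edge site.
    All inputs are illustrative assumptions, not measured figures."""
    storage_gb = ingest_gb_per_day * retention_days
    sustained_gflops = inferences_per_sec * gflops_per_inference
    return {"storage_gb": storage_gb, "sustained_gflops": sustained_gflops}

# E.g. a site ingesting 500 GB/day, retained for 7 days, serving
# 200 inferences/s at 2 GFLOPs each:
print(edge_capacity_estimate(500, 7, 200, 2))
# -> {'storage_gb': 3500, 'sustained_gflops': 400}
```

Even a rough estimate like this makes it clear whether a candidate edge site has the headroom, in storage, compute, network and cooling, to carry the planned workload.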
Machine learning will continue to enhance business decisions as the algorithms improve over time. Processing and analysis will increasingly occur wherever they are best suited for a given application, and in many cases that will be at the edge. As the fusion of machine learning and the edge continues to evolve, it will drive business efficiency, automation, predictive capabilities and decision-making on a greater scale.