What does the Nvidia deal for ARM mean for the internet of things?

Source: staceyoniot.com

Given the smoke from wildfires smothering the West Coast, I’ve been fighting a three-day migraine. And yet every time I fall asleep I dream about Nvidia’s decision to purchase ARM for $40 billion, which was formally announced Sunday. (The Wall Street Journal had disclosed the deal and purchase price on Friday.) I didn’t love the deal when it was mere speculation, and even after a lot of thought I still don’t.

I think that’s because this deal is about the data center and the overarching consolidation in the chip sector, as opposed to something deeply meaningful for the internet of things. When Softbank announced that it would purchase ARM in 2016, it was because Softbank CEO Masayoshi Son was convinced that the internet of things would seed a new technological revolution and that we would see a trillion connected devices in the coming two decades.

But this deal feels like it’s about the data center, specifically how it helps Nvidia eliminate or seriously constrain Intel’s dominance in the chips inside the servers that host the internet, our mobile apps, and pretty much everything else out there. Nvidia has long been eyeing the space. As far back as 2008, it was looking at combining a GPU with an ARM-based processor to handle some of the graphics-heavy components of corporate computing, and it was also pushing its GPUs as accelerators inside supercomputers.

With ARM, Nvidia can push much harder with a processor design for the data center that can truly rival x86. ARM has been working on this for a decade, and it has begun to make headway. The biggest sign that ARM might be able to handle general-purpose computing is the upcoming launch of new MacBooks from Apple that will be based on an ARM processor design. Many companies building ARM designs for the data center are doing two things: focusing on a system-wide architecture that puts many ARM cores in parallel, and designing a networking software layer to optimize how those cores talk to each other and to the memory inside the server.

This is remarkably similar to how Nvidia designs its graphics cards and the computers that use GPUs for processing, so designers from both companies will have a common foundation to build upon. The biggest defining feature of GPUs is that they are power-hungry. Yes, even at that higher power consumption they are more efficient for workloads like machine learning, but the overall power consumption associated with training and running neural networks is significant.

So, it’s possible that ARM’s focus on efficiency, on doing as much computation as possible within the smallest possible power footprint, could influence Nvidia’s design decisions and help it build processors for machine learning that won’t require a new power plant to keep up with demand. Inside the data center, think of this deal as a way to give Nvidia independence from Intel, AMD, and other x86 architectures, and potentially a way to bring the power consumption of data-center computing down.

But Stacey, where is the internet of things in all of this? It’s at the edge. Not at the sensor level, where ARM-designed microcontrollers have a lot of market share, but inside the gateways and other edge-computing devices where customers are asking for data analytics and real-time decision-making. Nvidia has a foothold in a small portion of this market with its Jetson AI products. These devices are very popular in the industrial IoT, where developers use them to run machine learning models for a variety of use cases, from visual models such as flare detection to more complex models designed to optimize product flow inside a factory.
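To make that concrete, here is a minimal sketch of the kind of inference loop that runs on one of these edge gateways. Everything in it is a hypothetical illustration rather than a detail from the deal coverage: the model file (flare_detector.onnx), its input shape, and the choice of ONNX Runtime as the engine (on a Jetson, developers would often use Nvidia’s TensorRT instead) are all assumptions.

```python
# Hypothetical sketch of an edge-inference loop on a Jetson-class gateway.
# The model file and input shape are placeholders; ONNX Runtime stands in
# for whatever engine (TensorRT, for example) the deployment actually uses.
import numpy as np
import onnxruntime as ort

# Load a vision model exported to ONNX, e.g. a flare detector.
session = ort.InferenceSession("flare_detector.onnx")
input_name = session.get_inputs()[0].name

def classify_frame(frame: np.ndarray) -> int:
    """Run one camera frame through the model and return the top class."""
    batch = frame[np.newaxis].astype(np.float32)  # add batch dim, cast to float32
    logits = session.run(None, {input_name: batch})[0]
    return int(np.argmax(logits))

# In production the frame would come from a camera; here we fake a 224x224 RGB frame.
fake_frame = np.random.rand(3, 224, 224)
print("predicted class:", classify_frame(fake_frame))
```

The reason to run this at the gateway rather than in the cloud is latency and bandwidth: the camera frames never leave the factory floor, and a decision comes back in milliseconds.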

ARM has been going beyond cell phones with its fancy A-class of processor designs for gateways inside factories, cell phone networks, and routers for a few years now. Combining ARM’s computing power (in terms of performance, not energy demand) with Nvidia’s parallel processing could create extremely powerful and efficient gateways for the industrial IoT, for modeling the intricacies of 5G networks, and more. Intel has a large footprint here, but ARM has been pushing hard. This could open a new market for Nvidia, and it’s one where its machine learning skills would come in handy.

So I can see why Nvidia would want some of ARM, but I can’t really see why Nvidia would want all of ARM. ARM has a large business in designing microcontrollers, counts its designs in almost every cell phone sold in the world, and even has a graphics business, Mali, that supplies GPUs for smart TVs, wearables, and cell phones. Could Nvidia broaden its market and absorb all of this? Sure. But there’s the elephant in the room.

ARM doesn’t actually design entire chips (most chipmakers don’t actually make their chips; they design them and then send them to a fab to make the physical silicon); it designs part of a chip and then licenses that design out to other chipmakers, who tweak it or include it as part of their overall products. ARM’s customers include companies such as Qualcomm, Apple, Amazon, Google, NXP, Nvidia, and more, all of which use ARM designs as a basis for their own silicon.

In the data center, if Nvidia does create a powerful Nvidia+ARM design, then the dozens of other companies such as Marvell, Qualcomm, and Ampere will probably wonder why they are paying a competitor to help outcompete them. Nvidia CEO Jensen Huang, in a conference call on Sunday, explained that Nvidia won’t compete with rival chipmakers because Nvidia doesn’t intend to get into the cell phone market. This is laughable, since ARM’s growth plans are all about the data center, where Nvidia does plan to compete.

My hunch is that as this deal wends its way through regulatory approvals, we’ll see protests from ARM customers and potentially regulators, although I am unsure whether the EU will lift a hand to help a UK-based company after Brexit. I don’t love the deal in part because I think it will upset a business model that has worked and led to a lot of innovation while also ensuring basic interoperability at the lowest layers of the computer, and I also don’t like it because it takes ARM’s focus off the internet of things.

What happens to ARM’s Pelion IoT platform, which it so recently planned to spin out to Softbank? Will we see ARM attempt to split out its MCU business for the IoT, even though plenty of smart devices also run its A-class processors? ARM has also invested in several businesses that are building better low-power radios and energy-harvesting chips, which are essential for the IoT. Will Nvidia continue that? Is Nvidia ready to become a broad chip company that encompasses far more than graphics? It failed when it tried to get into the wireless business, and the jury is still out on its purchase of networking company Mellanox.

I’ll briefly say that the deal and potential upset could lead to a boost for the up-and-coming open-source RISC-V architecture that many chip companies are currently experimenting with as a way to reduce their reliance on ARM and its license fees. I’m going to talk to more people about this because if ARM isn’t going to focus on the IoT, I think we’ll see other companies prepare to step up and take on that opportunity.
