by Dan Mandell | 04/19/2019
The embedded processor landscape is undergoing a dramatic transformation that is redefining the bounds of discrete processing. While the global markets for discrete embedded MPUs, GPUs, DSPs, and FPGAs continue to grow, other processing solutions are absorbing the majority of new socket opportunities. Embedded processor suppliers and IP providers have been forced to adapt their product families and offerings to keep pace with mounting requirements in both traditional and emerging industry applications. Discrete processing – the use of dedicated hardware for specific workloads or functionality – is not what it once was.
In the past, the “swimming lanes” between different types of processors were clearly defined and their roles well understood. However, as heterogeneous computing spread in the form of increasingly diversified and optimized SoCs, traditional discrete processors took a back seat. More recently, the endless thirst for greater performance and throughput has spawned new markets for a variety of dedicated accelerators and radio-centric solutions. The landscape is further complicated by growing requirements for adherence to industry and safety-critical standards.
The supplier community has responded in a variety of ways. First, the embedded SoC space has matured to a point where many product families are dedicated to specific types of systems and deployments.
A flexible SoC architecture can also negate the need for traditional discrete processors while helping manage SKU bloat, cost, and software preservation. For example, on AMD’s new R1000 APUs, applications can turn off the embedded graphics core and other resources to run at lower power levels. Leading FPGA technology providers like Intel and Xilinx have also adopted SoC architectures to expand the RF and media/vision processing capabilities and overall programmable logic of their hardware.
AI and machine learning are driving the expansion of the market for dedicated accelerator processors. Acceleration is needed everywhere – from centralized datacenters to the network edge and field devices. This is translating into demand for hardware acceleration across a variety of processor footprints, from MCUs through sophisticated multicore SoCs. Semiconductor IP providers have helped lead the charge. Arm launched its first machine learning processor IP in 2018, emphasizing four key attributes: static scheduling, efficient convolutions, bandwidth reduction mechanisms, and programmability/flexibility. Synopsys augmented its DesignWare IP portfolio for the development of new AI SoCs with new IP for specialized processing, memory performance, and real-time data connectivity. Hardware suppliers themselves have also rolled out new accelerated processing solutions for AI/ML and vision processing as well as cryptographic/security acceleration; these suppliers include AppliedMicro, Cavium, Marvell, NXP, STMicroelectronics, and Texas Instruments, along with new entrants such as Wave Computing.
While embedded processors have evolved considerably over the past couple of decades, so too has the concept of discrete/dedicated computing. The lines have blurred amid growing capabilities and marketing jargon, but a new generation of discrete processors is emerging to tackle specific workloads and engineering challenges. The market for this new generation of processors is only beginning to build traction, but it will ultimately be a formidable force riding the growing demands and requirements across the gamut of embedded applications.
VDC Research’s annual IoT, Embedded, & Mobile Processors market research study will be published in Q3. View the 2019 IoT & Embedded Technology Research Outline to learn more about our coverage.