Arm has enhanced its Cortex-M architecture processors with neural network processing instructions targeted at IoT edge-device applications. This will give edge devices machine-learning (ML) capability to recognise a limited subset of spoken words or phrases without recourse to cloud resources or powerful servers. This capability results in systems with better data security and reduced network energy, latency and bandwidth requirements.

These M-Profile Vector Extensions (MVE) have been introduced under the ‘Helium’ name and operate in a way similar to the Neon SIMD (single-instruction multiple-data) extensions to the firm’s Cortex-A cores. Helium has been designed from the ground up specifically to provide efficient signal processing performance for small processors, adding many new architectural features and delivering a significant performance boost for machine-learning and digital signal processing (DSP) applications.

Along with the standard 32-bit Armv8-M instructions there is support for fixed-length 128-bit vector processing and increased arithmetic provision. The company claims that Helium can deliver up to a 15-fold boost in ML performance and up to a five-fold boost in signal processing for future Cortex-M processors.