Edge Artificial Intelligence Co-Processors for Multimodal Sensor Fusion
Keywords: Edge AI, sensor fusion, co-processor design, multimodal inference

Abstract
The rise of intelligent edge applications, from autonomous vehicles to wearable health monitors, has driven demand for efficient on-device processing of complex, heterogeneous sensory inputs. Multimodal sensor fusion combines data from diverse sources, such as vision, audio, inertial, and environmental sensors, to enable robust perception and decision-making. However, processing these multimodal signals in real time with high energy efficiency poses a major challenge for edge computing platforms. This paper presents the design and implementation of Edge AI Co-Processors (EACPs) specifically optimized for multimodal sensor fusion, combining neural acceleration with adaptive datapath control. The proposed EACP architecture consists of domain-specific compute engines, dynamic dataflow interconnects, and a lightweight fusion scheduler optimized for low-latency inference. A hybrid neural architecture is employed: convolutional networks process image frames, temporal convolutional or recurrent modules handle sequential data, and a fusion block integrates modality features using attention mechanisms. The co-processor is implemented on a heterogeneous SoC, integrating RISC-V cores with tightly coupled tensor units and programmable sensor interfaces.
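To make the attention-based fusion step concrete, the following is a minimal sketch of how per-modality features might be combined with softmax attention weights. All names (`attention_fuse`, the modality labels, the scalar relevance scores) are illustrative assumptions, not the paper's actual implementation, which runs on the EACP hardware rather than in NumPy.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(features, scores):
    """Fuse per-modality feature vectors with softmax attention weights.

    features: dict mapping modality name -> 1-D feature vector (common length d)
    scores:   dict mapping modality name -> scalar relevance score
    Returns the attention-weighted sum of the modality feature vectors.
    """
    names = sorted(features)
    weights = softmax(np.array([scores[n] for n in names]))
    return sum(w * features[n] for w, n in zip(weights, names))

# Hypothetical example: vision, audio, and IMU features of dimension 4.
feats = {
    "vision": np.array([1.0, 0.0, 0.0, 0.0]),
    "audio":  np.array([0.0, 1.0, 0.0, 0.0]),
    "imu":    np.array([0.0, 0.0, 1.0, 0.0]),
}
scores = {"vision": 2.0, "audio": 1.0, "imu": 0.0}
fused = attention_fuse(feats, scores)  # vision dominates the fused vector
```

In a real system the relevance scores would themselves be learned (e.g., produced by a small network from the modality features), and the fusion scheduler described above would decide when each modality's features are available to fuse.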
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
