Edge Artificial Intelligence Co-Processors for Multimodal Sensor Fusion

Authors

  • Eudemonia Selfimona, Department of Computer Science, Italian University of Software Engineering

Keywords

Edge AI, sensor fusion, co-processor design, multimodal inference

Abstract

The rise of intelligent edge applications, from autonomous vehicles to wearable health monitors, has driven demand for efficient on-device processing of complex, heterogeneous sensory inputs. Multimodal sensor fusion combines data from diverse sources, such as vision, audio, inertial, and environmental sensors, to enable robust perception and decision-making. However, processing these multimodal signals in real time with high energy efficiency remains a major challenge for edge computing platforms. This paper presents the design and implementation of Edge AI Co-Processors (EACPs) optimized for multimodal sensor fusion, combining neural acceleration with adaptive datapath control. The proposed EACP architecture comprises domain-specific compute engines, dynamic dataflow interconnects, and a lightweight fusion scheduler optimized for low-latency inference. A hybrid neural architecture is employed: convolutional networks process image frames, temporal convolutional or recurrent (RNN) modules handle sequential data, and a fusion block integrates per-modality features using attention mechanisms. The co-processor is implemented on a heterogeneous SoC that integrates RISC-V cores with tightly coupled tensor units and programmable sensor interfaces.
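The attention-based fusion block mentioned in the abstract can be sketched in plain Python. This is a minimal illustration only: the modality names, feature vectors, and the scoring function are hypothetical assumptions, not details taken from the paper's EACP design.

```python
import math

def softmax(scores):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fusion(features, score_fn):
    """Fuse per-modality feature vectors with scalar attention weights.

    features: dict mapping modality name -> feature vector (equal lengths)
    score_fn: maps a feature vector to a scalar relevance score (assumed;
              in practice this would be a learned scoring network)
    """
    names = list(features)
    scores = [score_fn(features[n]) for n in names]
    alpha = softmax(scores)  # attention weights, one per modality, sum to 1
    d = len(next(iter(features.values())))
    # Fused feature = attention-weighted sum of the modality features.
    fused = [sum(a * features[n][i] for a, n in zip(alpha, names))
             for i in range(d)]
    return fused, dict(zip(names, alpha))

# Toy per-modality embeddings standing in for the hybrid front end the
# abstract describes (CNN for vision, temporal-conv/RNN for sequences).
feats = {
    "vision": [0.9, 0.1, 0.4, 0.0],
    "audio":  [0.2, 0.8, 0.1, 0.3],
    "imu":    [0.1, 0.2, 0.7, 0.5],
}
fused, weights = attention_fusion(feats, score_fn=lambda v: sum(v))
```

The weighted sum lets the fusion stage emphasize whichever modality currently scores as most informative, which is the property that makes attention attractive for heterogeneous sensor streams.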

Published

2026-01-15

Section

Articles

How to Cite

Edge Artificial Intelligence Co Processors for Multimodal Sensor Fusion (Eudemonia Selfimona, Trans.). (2026). Unique Journal of Artificial Intelligence, 4(1), 121-130. https://uniquespublisher.com/index.php/UJAI/article/view/32