
Architecting More Than Moore – Wireless Plasticity for Heterogeneous Massive Computer Architectures

Start Date: 01-10-2019

End Date: 30-09-2022

Id: WiPLASH

CORDIS identification number: 863337

The main design principles in computer architecture have shifted from a monolithic, scaling-driven approach towards heterogeneous architectures that tightly co-integrate multiple specialized computing and memory units. This shift is driven by the urgent need for very high parallelism and by energy constraints. Such heterogeneous hardware specialization requires interconnection mechanisms that bind the architecture together. State-of-the-art approaches rely on 3D stacking and 2.5D integration complemented with a Network-on-Chip (NoC) to interconnect the components. However, such interconnects are fundamentally monolithic and rigid, and cannot provide the efficiency and architectural flexibility required by current and future key ICT applications.

The main challenge is to introduce diversification and specialization into heterogeneous processor architectures while preserving their generality and scalability. To achieve this, the WiPLASH project aims to pioneer an on-chip wireless communication plane able to provide architectural plasticity, reconfigurability and adaptation to application requirements with near-ASIC efficiency, yet without any loss of generality. To this end, the WiPLASH consortium will lay solid experimental foundations for the key enablers of on-chip wireless communication at the functional-unit level, as well as for their technological and architectural integration.

The main goals are to (i) prototype a miniaturized and tunable graphene antenna in the terahertz band, (ii) co-integrate graphene RF components with submillimeter-wave transceivers, and (iii) demonstrate low-power, reconfigurable wireless chip-scale networks. The culminating goal is to demonstrate that the wireless plane offers the plasticity required by future computing platforms by improving at least one key application (primarily biologically-plausible deep learning architectures) by 10X in execution speed and energy-delay product over a state-of-the-art baseline.
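For reference, the 10X target refers to two standard figures of merit: execution time and the energy-delay product (EDP), i.e. the energy consumed by a run multiplied by its execution time. The minimal sketch below (with purely hypothetical placeholder numbers, not project measurements) illustrates how such an improvement over a baseline would be computed.

```python
# Illustrative sketch only: all numbers are hypothetical placeholders,
# not WiPLASH measurements or results.

def edp(energy_joules: float, delay_seconds: float) -> float:
    """Energy-delay product (EDP) of one application run."""
    return energy_joules * delay_seconds

# Hypothetical baseline (wired NoC) vs. wireless-augmented runs of the
# same deep-learning workload.
baseline_energy, baseline_delay = 2.0, 1.0     # placeholders (J, s)
wireless_energy, wireless_delay = 0.5, 0.25    # placeholders (J, s)

speedup = baseline_delay / wireless_delay
edp_gain = edp(baseline_energy, baseline_delay) / edp(wireless_energy, wireless_delay)

# The stated 10X target would correspond to speedup >= 10 and edp_gain >= 10.
print(f"speedup = {speedup:.1f}x, EDP improvement = {edp_gain:.1f}x")
```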