Adaptive Memory Architectures for Brain-Inspired and AI Workloads
Gain cell RAM-based Mamba-like Model Accelerator
This project explores the co-design of hardware and software to make gain-cell RAM an effective on-chip buffer for AI accelerators. Gain-cell RAM offers higher density and lower energy than conventional SRAM, but suffers from short and variable retention times. Our work identifies data-lifetime patterns in Mamba-style state space models and dynamically tailors refresh policies and data-mapping strategies accordingly.
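As a rough illustration of lifetime-aware refresh, consider a scheduler that skips refreshing buffer entries whose data will be consumed before the stored charge decays. The function names, entry fields, and retention figure below are illustrative assumptions, not the project's actual design:

```python
# Hypothetical sketch: lifetime-aware refresh scheduling for a gain-cell
# buffer. All names and constants are illustrative assumptions.

RETENTION_CYCLES = 100  # assumed worst-case gain-cell retention window


def plan_refreshes(entries, now):
    """Skip refreshing entries whose data is consumed (dead) before decay."""
    to_refresh = []
    for e in entries:
        age = now - e["last_written"]       # cycles since the cell was written
        dies_at = e["last_read"]            # cycle of the final consumer read
        if dies_at <= now + (RETENTION_CYCLES - age):
            continue                        # data expires before decay: skip
        to_refresh.append(e["addr"])
    return to_refresh


entries = [
    {"addr": 0, "last_written": 0, "last_read": 50},   # short-lived: skip
    {"addr": 1, "last_written": 0, "last_read": 500},  # long-lived: refresh
]
print(plan_refreshes(entries, now=40))  # -> [1]
```

The point of such a policy is that short-lived activations in a state space model never need refreshing at all, so refresh energy is paid only for genuinely long-lived state.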
Knowledge Retrieval System Using FeFET-based Nanodendrite Network
This project studies multi-gate FeFET-based nanodendrite devices as the building blocks of knowledge-decoding networks, and accelerates the decoding process by co-designing the dendritic computing algorithms with the devices.