Jubin Choi¹, David Keetae Park², Junbeom Kwon³, Shinjae Yoo², Jiook Cha¹ ⁴ ⁵ ⁶*
¹Seoul National University, ²Brookhaven National Laboratory, ³University of Texas at Austin,
<aside> 🔗
Quick Links
📄 Read the Paper | 🧠 Connectome Lab | 🌐 Jubin Choi (LinkedIn)
</aside>
⚡️ TL;DR
NeuroMamba is a foundation model that applies Direct 4D Sequence Modeling to whole-brain fMRI. Prior methods either process the full 4D volume (wasting compute on empty background) or rely on ROI-based input (losing voxel-level detail). NeuroMamba instead combines Adaptive Background Removal with a Mamba backbone, achieving a 46.5% FLOPs reduction while reaching SOTA accuracy.
🗓️ Workshop Schedule (Upper Level Room 24ABC):
🗣️ 2:00 PM: Spotlight Talk (Session II: Modeling Physiological Signals)
📊 3:45 PM: Poster Session

The Challenge
Current fMRI foundation models suffer from a trade-off: processing the full 4D volume wastes compute on empty background voxels, while ROI-based input discards fine-grained spatial detail.

The Solution
NeuroMamba breaks this trade-off by treating 4D fMRI as a unified token sequence and removing non-brain tokens from the input entirely.

Deep Dive: Architecture & Pipeline
- Input: 4D fMRI volumes divided into non-overlapping spatiotemporal patches.
- Adaptive Background Removal: We identify and discard non-brain tokens before processing. This handles variable-length inputs efficiently.
- Backbone: 12-layer Mamba2 architecture trained with Autoregressive Next-Token Prediction.
- Positional Encoding: NeRF-style encoding to handle continuous $(x, y, z, t)$ coordinates, ensuring robustness across subject variations.
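The pipeline above can be sketched end to end. This is a minimal NumPy illustration, not the paper's implementation: the patch size, the zero-background threshold, and the number of encoding frequencies are all assumed values chosen for the demo.

```python
import numpy as np

def patchify(vol, patch=(8, 8, 8, 4)):
    """Split a 4D fMRI volume (X, Y, Z, T) into non-overlapping
    spatiotemporal patches; returns flattened tokens plus each
    patch's (x, y, z, t) grid origin. Patch size is illustrative."""
    X, Y, Z, T = vol.shape
    px, py, pz, pt = patch
    tokens, coords = [], []
    for x in range(0, X - px + 1, px):
        for y in range(0, Y - py + 1, py):
            for z in range(0, Z - pz + 1, pz):
                for t in range(0, T - pt + 1, pt):
                    tokens.append(vol[x:x+px, y:y+py, z:z+pz, t:t+pt].ravel())
                    coords.append((x, y, z, t))
    return np.stack(tokens), np.array(coords, dtype=float)

def drop_background(tokens, coords, thresh=1e-6):
    """Adaptive background removal, assuming non-brain voxels are
    (near-)zero: discard tokens with no signal anywhere inside."""
    keep = np.abs(tokens).max(axis=1) > thresh
    return tokens[keep], coords[keep]

def nerf_encode(coords, n_freqs=4):
    """NeRF-style positional encoding of continuous (x, y, z, t):
    gamma(p) = [sin(2^k * pi * p), cos(2^k * pi * p)], k = 0..n_freqs-1."""
    p = coords / coords.max(axis=0).clip(min=1)   # normalize to [0, 1]
    freqs = (2.0 ** np.arange(n_freqs)) * np.pi   # (n_freqs,)
    angles = p[:, :, None] * freqs                # (n_tokens, 4, n_freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)],
                          axis=-1).reshape(len(coords), -1)

# Demo: a 16x16x16x8 volume whose "brain" occupies only one corner.
rng = np.random.default_rng(0)
vol = np.zeros((16, 16, 16, 8))
vol[:8, :8, :8, :] = rng.normal(size=(8, 8, 8, 8))

tokens, coords = patchify(vol)                    # 16 patches total
tokens, coords = drop_background(tokens, coords)  # only 2 survive
pe = nerf_encode(coords)                          # (2, 4 coords * 4 freqs * 2)
```

Because background tokens are dropped before the backbone ever sees them, the sequence length (and hence compute) shrinks in proportion to the empty fraction of the volume; the continuous-coordinate encoding keeps the surviving tokens localizable despite the variable-length input.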
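To make the training objective concrete: in autoregressive next-token prediction, the model's output at position i is scored against the ground-truth token at position i + 1. The sketch below uses a toy linear sequence and a closed-form least-squares predictor as a stand-in for the Mamba2 backbone; the dynamics, dimensions, and regression-style loss are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy token sequence: 64 tokens of dimension 8 following a noisy
# linear dynamic, standing in for embedded fMRI patches.
W_true = rng.normal(scale=0.3, size=(8, 8))
seq = [rng.normal(size=8)]
for _ in range(63):
    seq.append(seq[-1] @ W_true + 0.01 * rng.normal(size=8))
seq = np.stack(seq)                       # (64, 8)

# Next-token prediction: fit a map from token i to token i + 1.
# Inputs are all tokens but the last; targets are all but the first.
X, Y = seq[:-1], seq[1:]
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
mse = np.mean((X @ W_hat - Y) ** 2)       # next-token prediction error
```

The shift-by-one pairing of inputs and targets is the essence of the objective; a real backbone replaces the single linear map with a deep causal sequence model trained by gradient descent.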
Neural Scaling Laws

We validated that fMRI modeling follows neural scaling laws: validation loss decreases predictably with compute and parameter count (up to 5.4M parameters), confirming that NeuroMamba is a scalable architecture.
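A scaling law of the form L(N) = a · N^(−α) is linear in log-log space, so the exponent can be recovered with a straight-line fit. The loss values below are synthetic placeholders invented for the demo (the paper's measurements are not reproduced here); only the fitting procedure is the point.

```python
import numpy as np

# Hypothetical (synthetic) validation losses at increasing parameter
# counts, purely to illustrate the fit; not the paper's numbers.
params = np.array([0.4e6, 0.9e6, 1.8e6, 3.1e6, 5.4e6])
loss   = np.array([0.82, 0.71, 0.63, 0.57, 0.52])

# log L = log a - alpha * log N, so a degree-1 polyfit in log space
# recovers the scaling exponent alpha and prefactor a.
slope, intercept = np.polyfit(np.log(params), np.log(loss), 1)
alpha, a = -slope, np.exp(intercept)
pred = a * params ** (-alpha)             # fitted power-law curve
```

A positive α with a good straight-line fit is what "loss decreases predictably with scale" means operationally: each multiplicative increase in parameters buys a predictable multiplicative drop in loss.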
SOTA Performance
| Model | Parameters | Data Split | AUROC (%) | Accuracy (%) |
|---|---|---|---|---|
| SwiFT | 4.6M | 70/15/15 | 98.0 | 92.9 |
| NeuroSTORM | 5.0M | 70/15/15 | 97.6 | 93.3 |
| NeuroMamba | 3.1M | 80/10/10 | 98.7 | 94.9 |