Jubin Choi¹, David Keetae Park², Junbeom Kwon³, Shinjae Yoo², Jiook Cha¹ ⁴ ⁵ ⁶*
¹Seoul National University, ²Brookhaven National Laboratory, ³University of Texas at Austin

<aside> 🔗

Quick Links

📄 Read the Paper | 🧠 Connectome Lab | 🌐 Jubin Choi (LinkedIn)

</aside>

⚡️ TL;DR

NeuroMamba is a foundation model that applies Direct 4D Sequence Modeling to whole-brain fMRI. Prior methods either process the full 4D whole-brain volume (wasting compute on empty background) or compress it to ROI-based inputs (losing fine-grained spatial detail). NeuroMamba uses Adaptive Background Removal and a Mamba backbone to cut FLOPs by 46.5% while hitting SOTA accuracy.

🗓️ Workshop Schedule (Upper Level Room 24ABC):
🗣️ 2:00 PM: Spotlight Talk (Session II: Modeling Physiological Signals)
📊 3:45 PM: Poster Session

🧐 The Problem: Losing Detail vs. Wasting Compute

fig1_workshop.png

The Challenge: Current fMRI foundation models suffer from a trade-off:

  1. ROI-based models (e.g., BrainLM): efficient, but discard fine-grained spatial information.
  2. Hierarchical 4D models (e.g., SwiFT): preserve spatial detail, but process the ~60% of the volume that is empty background, leading to massive inefficiency.

The Solution: NeuroMamba breaks this trade-off by treating 4D fMRI as a single unified token sequence and removing non-brain tokens from the input entirely.
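The background-removal idea can be sketched as patch-based token selection: split the 4D volume into spatial patches and keep only patches that contain brain signal. A minimal sketch (the function name, patch size, and mean-signal thresholding rule are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def remove_background_tokens(volume, patch=4, threshold=0.0):
    """Split a 4D fMRI volume (T, X, Y, Z) into spatial patches and keep
    only patches whose mean signal exceeds `threshold`, i.e. tokens that
    actually cover brain tissue. Returns the kept tokens (each flattened
    to shape (T, patch**3)) and their original patch indices."""
    T, X, Y, Z = volume.shape
    tokens, keep_idx = [], []
    idx = 0
    for x in range(0, X, patch):
        for y in range(0, Y, patch):
            for z in range(0, Z, patch):
                tok = volume[:, x:x+patch, y:y+patch, z:z+patch]
                if tok.mean() > threshold:  # non-empty patch: keep as a token
                    tokens.append(tok.reshape(T, -1))
                    keep_idx.append(idx)
                idx += 1
    return tokens, keep_idx

# Toy example: a volume that is empty except for a central "brain" region.
vol = np.zeros((4, 16, 16, 16))
vol[:, 4:12, 4:12, 4:12] = 1.0
tokens, kept = remove_background_tokens(vol, patch=4)
# Only 8 of the 64 patches intersect the brain region, so the sequence
# fed to the backbone is 8x shorter than the full patch grid.
```

Dropping background patches before sequence modeling is where the FLOPs savings come from: sequence length, and hence compute, shrinks in proportion to the fraction of empty voxels removed.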


🛠️ Methodology: How It Works

figure2.png

Deep Dive: Architecture & Pipeline
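The Mamba backbone is built on state-space sequence models, whose core is a linear recurrence scanned over the token sequence. A minimal numpy sketch of that recurrence (a plain, non-selective linear SSM with illustrative shapes; Mamba additionally makes the parameters input-dependent):

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal linear state-space scan underlying Mamba-style backbones.

    x: (T, d_in) input token sequence
    A: (d_state, d_state) state-transition matrix
    B: (d_state, d_in) input projection
    C: (d_out, d_state) readout projection
    """
    h = np.zeros(A.shape[0])
    ys = []
    for t in range(x.shape[0]):
        h = A @ h + B @ x[t]   # state update: one step per token
        ys.append(C @ h)       # readout at each step
    return np.stack(ys)

rng = np.random.default_rng(0)
T, d_in, d_state, d_out = 6, 4, 8, 4
y = ssm_scan(rng.normal(size=(T, d_in)),
             0.9 * np.eye(d_state),          # stable, decaying state
             rng.normal(size=(d_state, d_in)),
             rng.normal(size=(d_out, d_state)))
```

Because the state `h` is fixed-size, compute and memory grow linearly with sequence length, which is what makes a long unified 4D token sequence tractable compared to quadratic-cost attention.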


📈 Key Results

  1. Neural Scaling Laws

    figure3.png

    We validated that fMRI modeling follows neural scaling laws. Validation loss decreases predictably with compute and parameter size (up to 5.4M params), confirming NeuroMamba is a scalable architecture.

  2. SOTA Performance

    | Model | Parameters | Data Split | AUROC (%) | Accuracy (%) |
    | --- | --- | --- | --- | --- |
    | SwiFT | 4.6M | 70/15/15 | 98.0 | 92.9 |
    | NeuroSTORM | 5.0M | 70/15/15 | 97.6 | 93.3 |
    | NeuroMamba | 3.1M | 80/10/10 | 98.7 | 94.9 |
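The scaling-law trend in Result 1 amounts to fitting a power law, loss L(N) = a · N^(−b), to (parameter count, validation loss) pairs, which is a straight line in log-log space. A sketch with purely illustrative numbers (only the 5.4M upper bound comes from the text; the loss values are made up for demonstration):

```python
import numpy as np

# Illustrative (parameter count, validation loss) pairs -- NOT real results.
params = np.array([0.3e6, 0.9e6, 1.7e6, 3.1e6, 5.4e6])
val_loss = np.array([0.52, 0.44, 0.40, 0.37, 0.35])

# In log-log space a power law L = a * N**(-b) becomes
# log L = log a - b * log N, so an ordinary linear fit recovers (a, b).
slope, intercept = np.polyfit(np.log(params), np.log(val_loss), 1)
b, a = -slope, np.exp(intercept)

# A positive b means loss keeps decreasing predictably as the model grows.
predicted = a * params ** (-b)
```

A clean power-law fit like this is the standard evidence that an architecture is worth scaling: extrapolating the fitted line predicts the payoff from training larger models before paying for them.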

📥 Resources
