February 18, 2025

Interpretable ADHD Models for Clinician Adoption

How we partnered with clinical researchers to balance predictive accuracy, fairness, and narrative insight for ADHD assessment tooling.

Healthcare AI · Explainability · Research

Partnership Principles

Clinicians ask: "Can I trust this model for my patient?" We answered by pairing metrics with narratives.

  1. Transparent features — start with psychometric scales clinicians recognize.
  2. Fairness audits — flag demographic skews before tuning hyperparameters (see the audit sketch after this list).
  3. Narrative-ready outputs — convert SHAP attributions into short takeaway paragraphs.
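A minimal sketch of the fairness audit from step 2, assuming predictions and labels sit in a pandas DataFrame; the column names (`demographic_group`, `label`, `prediction`) and the 10-point recall gap threshold are illustrative choices, not the clinical team's actual cutoffs.

```python
# Per-group performance check run before any hyperparameter tuning.
# Column names and the flag threshold are illustrative placeholders.
import pandas as pd
from sklearn.metrics import precision_score, recall_score

def fairness_audit(df: pd.DataFrame, group_col: str = "demographic_group") -> pd.DataFrame:
    """Report recall and precision per demographic group and flag large gaps."""
    rows = []
    for group, subset in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(subset),
            "recall": recall_score(subset["label"], subset["prediction"]),
            "precision": precision_score(subset["label"], subset["prediction"]),
        })
    report = pd.DataFrame(rows)
    # Flag any group whose recall trails the best-performing group by more than 10 points.
    report["flagged"] = report["recall"] < report["recall"].max() - 0.10
    return report
```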

Model Stack

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Standardize the psychometric scale scores, then fit a class-balanced
# logistic regression so the ADHD-positive minority class is not drowned out.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(class_weight="balanced")),
])
```

This baseline hit 0.82 F1 while staying explainable. We layered XGBoost later but kept the explainer contract identical.
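One way to keep that contract identical across the logistic-regression baseline and the later XGBoost model is to route every estimator through the same attribution function. This is a sketch assuming the shap package is the attribution layer; the explainer class shap selects under the hood can differ by model type.

```python
# Shared explainer contract: any fitted estimator with predict_proba goes in,
# per-feature attributions for a case come out.
import shap  # assumption: shap is the attribution library in use

def explain_case(model, X_background, X_case):
    """Return SHAP attributions for a case using a model-agnostic explainer."""
    explainer = shap.Explainer(model.predict_proba, X_background)
    return explainer(X_case)

# The logistic-regression pipeline above and an XGBoost classifier both satisfy
# this contract, so the narrative layer downstream never has to change.
```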

Narrative Template

The model sees elevated inattention indicators (0.67 SHAP) and behavioral feedback from caregivers (0.54). Tailor the intervention toward sustained attention routines and caregiver coaching.
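A sketch of how attributions become that paragraph, assuming they arrive as a feature-to-SHAP-value mapping; the `FEATURE_PHRASES` and `INTERVENTION_HINTS` dictionaries stand in for wording maintained by the clinical team.

```python
# Turn the top SHAP attributions into a short, clinician-readable takeaway.
# FEATURE_PHRASES and INTERVENTION_HINTS are illustrative placeholders.
FEATURE_PHRASES = {
    "inattention_scale": "elevated inattention indicators",
    "caregiver_feedback": "behavioral feedback from caregivers",
}
INTERVENTION_HINTS = {
    "inattention_scale": "sustained attention routines",
    "caregiver_feedback": "caregiver coaching",
}

def narrative(attributions: dict[str, float], top_k: int = 2) -> str:
    """Render the top-k positive attributions as a two-sentence takeaway."""
    top = sorted(attributions.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    seen = ", ".join(f"{FEATURE_PHRASES[name]} ({value:.2f} SHAP)" for name, value in top)
    plan = " and ".join(INTERVENTION_HINTS[name] for name, _ in top)
    return f"The model sees {seen}. Tailor the intervention toward {plan}."

print(narrative({"inattention_scale": 0.67, "caregiver_feedback": 0.54}))
```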

Keeping that narrative structure consistent across model versions made adoption painless.