One Blood Draw. Six Answers. An AI Model That Spots Dementia Before You Feel It.
Key Findings
Most blood tests look for one thing. ProtAIDe-Dx looks for six simultaneously. From a single tube of plasma, the model distinguishes between Alzheimer's disease, Parkinson's disease, frontotemporal dementia (FTD), ALS, stroke/TIA, and cognitively healthy controls.
That multi-disease approach matters because these conditions share symptoms, especially early on. Memory problems, personality changes, motor issues. Getting the right label usually takes years of specialist visits, expensive imaging, and sometimes lumbar punctures. This model does its sorting with proteins floating in your blood.
The model's balanced classification accuracy hit 95% for ALS and 92% for Parkinson's. Alzheimer's and FTD came in at 70-80%. All conditions scored above 0.8 AUC (the standard measure of how well a test separates sick from healthy) except stroke/TIA.
It beat every baseline the team threw at it: Random Forest, XGBoost, and the newer TabPFN model. The performance held up on an external validation cohort of 1,786 people from BioFINDER-2, a well-known Swedish research study.
In people who were cognitively normal at the time of their blood draw, the model predicted who would later develop cognitive decline with an AUC of 0.74. Not perfect. But for a blood test predicting future brain decline? That's a meaningful signal.
They also used a two-cutoff strategy that achieved over 90% specificity and positive predictive value in people with subjective cognitive decline (the "I feel like my memory is slipping" stage). High-confidence calls only, which is exactly how you'd want a screening tool to work.
The model identified 75 discriminative proteins that help sort between conditions. Some of these overlap with known biology (NEFL for neurodegeneration, GDF15 for ALS). Others are newer targets.
The standout: 7 proteins linked to brain resilience (GLO1, TGFB1, VAT1, STX1A, PDE11A, IGF2, OMG). These were associated with healthier brain aging. They also mapped 52 existing neurodegenerative drugs to 12 of the discriminative proteins, opening the door for drug repurposing research.
Why It Matters
If you've watched a parent or grandparent go through a dementia workup, you know the drill. Months of waiting. Referral to referral. A neuropsych battery that takes half a day. Maybe a PET scan that costs thousands and still doesn't give a clean answer. Some families wait 2-5 years from first symptom to accurate diagnosis.
A blood test that can narrow the field from "something is wrong with the brain" to "here are the most likely conditions, ranked by probability" would change that timeline. Not replace the full workup. But front-load the information so families and clinicians aren't guessing in the dark for years.
For neurodivergent families, there's a deeper connection here. Many of us know what a diagnostic odyssey feels like. The years of "maybe it's anxiety, maybe it's depression, maybe it's ADHD, maybe it's autism." The concept of a multi-condition screening tool from a single sample is the same idea applied to neurodegeneration. Pattern detection from biological data. That's what this field needs more of.
And the resilience proteins are the real sleeper finding. Most dementia research focuses on what goes wrong. These 7 proteins point toward what keeps brains healthy. That's where future interventions could get interesting.
The Fine Print
The model was trained on clinical diagnoses, not biomarker-confirmed diagnoses. Many participants were labeled "Alzheimer's" or "Parkinson's" based on a doctor's assessment, without autopsy, amyloid PET, or CSF confirmation.
This matters because clinical misdiagnosis rates for dementia subtypes run 12-25% even in specialized centers, based on autopsy confirmation studies. You're training an AI to match labels, and a meaningful fraction of those labels are wrong. The model can only be as good as the data it learned from.
The authors know this. They call it out. But it means the accuracy numbers are inflated by an unknown amount.
Two of the top discriminative proteins raise red flags. ACHE (acetylcholinesterase) is a top feature for Alzheimer's classification, but it's also the direct target of cholinesterase inhibitors, the most common AD medication. KCNIP3 helps classify ALS, but riluzole (the standard ALS drug) affects this pathway.
If the model is partly detecting who's on medication rather than who has the disease, that's a problem. Treated patients look different from untreated patients in their protein profiles. An undiagnosed person (the exact use case for a screening tool) wouldn't have that medication signal.
The model was trained on data from 22 sites, but performance varied significantly between sites. Batch effects (differences in how samples were collected, stored, and processed) are a known challenge in proteomics.
For a test to work in the real world, it needs to perform consistently whether the blood was drawn in Houston or Helsinki. The authors acknowledge this gap. Until cross-site generalization improves, deployment outside research settings is premature.
All data was generated on the SomaLogic SomaScan platform. Independent research shows only 43% of SomaScan assays have genetic evidence confirming they actually measure the intended protein (vs. 72% for the competing Olink platform).
That doesn't mean the model is wrong. The AI might be picking up real disease signals even if some individual protein measurements are off-target. But it does mean these results may not replicate on different proteomics platforms, which limits how broadly the technology can be adopted.
BioFINDER-2 is a well-known cohort and a legitimate validation set. But it's based at Lund University, which is also the home institution of several senior authors on this paper. That's not fraud. It's normal in academic research. But a truly independent validation from a group with no connection to the authors would carry more weight.
What to Do With This
You can't order this test yet. ProtAIDe-Dx is a research tool. There's no timeline for clinical availability. Don't let anyone sell you a "multi-disease blood panel" based on this paper.
But blood-based brain tests ARE arriving. The FDA cleared the first Alzheimer's blood test for primary care in 2025 (Roche Elecsys pTau181). The Alzheimer's Association published clinical practice guidelines for blood biomarkers that same year. If your parent or grandparent is showing cognitive changes, ask their neurologist about pTau181 or pTau217 blood tests. Those are real, available, and covered by some insurers.
Track what you're seeing at home. Cognitive changes, personality shifts, motor symptoms, sleep disruptions. The data you collect matters when you sit down with a specialist. Tools like Brainloot can help you track cognitive and behavioral changes over time, giving your care team data to act on.
Don't panic about dementia risk. The resilience proteins in this study are a reminder that brain health isn't purely genetic destiny. Sleep, exercise, social engagement, and managing cardiovascular risk factors all have strong evidence behind them. Start there.
Watch this space closely. Multi-disease blood-based diagnostics are coming. The field has moved from "interesting idea" to "17,000-person validation study in Nature Medicine" in about three years. Clinical-grade versions are on the horizon, though no one can put a firm timeline on them.
Be cautious about overpromising. Patients will bring this paper to you. The honest answer: blood proteomics shows real promise for multi-disease screening, but it's not ready for standalone clinical use. Site variability and medication confounding need to be solved first.
Consider the two-cutoff approach. The paper's strategy of high-confidence-only calls (>90% specificity and PPV) is a model for how screening tools should work. Flag probable cases for full workup. Let uncertain cases get more data. Don't force binary calls from probabilistic models.
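The two-cutoff idea reduces to a simple three-way decision rule. A minimal sketch, assuming the model emits a per-person disease probability; the cutoff values below are illustrative placeholders, not the ones tuned in the paper:

```python
# Hedged sketch of a two-cutoff screening rule. 'lower' and 'upper'
# are hypothetical thresholds, not the paper's calibrated values.
def triage(p_disease: float, lower: float = 0.2, upper: float = 0.9) -> str:
    """Three-way call instead of a forced binary decision."""
    if p_disease >= upper:
        return "flag"       # high confidence: refer for full workup
    if p_disease <= lower:
        return "negative"   # high confidence: routine monitoring
    return "uncertain"      # in between: collect more data first
```

In practice the upper cutoff would be chosen on a validation set to hit the target specificity and PPV (the paper reports over 90% for both), and everything between the cutoffs is deliberately left uncalled rather than forced into a yes/no.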
Independent replication on non-SomaLogic platforms is the next critical step. Can these results hold up on Olink or mass spectrometry? That answer determines whether this is a generalizable finding or a platform-specific artifact.
The medication confounding problem needs a clean experiment. Train the model only on treatment-naive patients and see if ACHE and KCNIP3 still rank as top features. If they drop, the current accuracy estimates need revision.
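One way to see why that experiment is decisive: compare a feature-importance proxy in the full cohort against the treatment-naive subset. Everything below is a hypothetical toy (made-up values, variable names standing in for ACHE and NEFL, a crude mean-difference proxy instead of the paper's actual model), sketched only to show the logic of the check:

```python
# Toy proxy for feature importance: absolute mean difference
# between disease and control groups. Purely illustrative.
def importance(values, labels):
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    return abs(sum(pos) / len(pos) - sum(neg) / len(neg))

# Hypothetical cohort: 'ache' is elevated only in treated cases,
# 'nefl' is elevated in all cases regardless of treatment.
labels  = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = disease
treated = [1, 1, 0, 0, 0, 0, 0, 0]   # 1 = on medication
ache    = [9, 9, 5, 5, 5, 5, 5, 5]   # tracks treatment, not disease
nefl    = [8, 8, 8, 8, 3, 3, 3, 3]   # tracks disease itself

# Full cohort: both proteins look discriminative.
full_ache = importance(ache, labels)             # 2.0
full_nefl = importance(nefl, labels)             # 5.0

# Treatment-naive subset: the medication-driven signal collapses.
naive = [i for i, t in enumerate(treated) if t == 0]
sub = lambda xs: [xs[i] for i in naive]
naive_ache = importance(sub(ache), sub(labels))  # 0.0
naive_nefl = importance(sub(nefl), sub(labels))  # 5.0
```

If a top-ranked protein behaves like the toy 'ache' variable (importance collapses once treated patients are removed), the model was partly learning medication status, and the headline accuracy needs revision.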
The 7 resilience proteins are underexplored. GLO1, TGFB1, VAT1, STX1A, PDE11A, IGF2, and OMG are all associated with healthier brain aging. That's a paper waiting to happen. What modulates their expression? Can they be targeted therapeutically?