Connect ChatGPT & Claude to personalized clinical-trial evidence via MCP
Why this matters: AI medical answers often summarize clinical-trial findings, and those summaries carry two risks: a precision risk and a recall risk.
Precision risk: the AI cites findings that don't apply to your personal situation.
Recall risk: the AI misses findings that do apply to your personal situation.
Example scenarios
Personalized Precision
“Did the cited evidence’s enrolled population actually apply to this user?”
Example Scenarios:
A statin clinical trial shows LDL reduction but excluded pregnant women; the AI recommends the statin to a 32-year-old trying to conceive.
A sleep-aid clinical trial reports benefit in adults 18–65, excluding older patients entirely; the AI applies the result to a 92-year-old at high fall risk.
MCP Tools
- Refine on P (Population): narrow toward findings whose enrolled population matches the user
- Applicability scoring: structured fit between the user's state and each clinical trial's eligibility criteria
- FHIR ingestion: Conditions, Medications, and Observations resolved to canonical identifiers
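Applicability scoring as described above can be sketched as a structured comparison between a user's resolved state and a trial's eligibility criteria. A minimal sketch follows; the `Eligibility` and `User` shapes, field names, and the halving penalty are hypothetical illustrations, not the tool's actual schema or scoring rule:

```python
from dataclasses import dataclass

@dataclass
class Eligibility:
    # Hypothetical eligibility criteria extracted from a trial record
    min_age: int
    max_age: int
    excluded_conditions: frozenset = frozenset()

@dataclass
class User:
    # Hypothetical user state resolved from FHIR Conditions/Observations
    age: int
    conditions: frozenset = frozenset()

def applicability(user: User, elig: Eligibility) -> float:
    """Return a 0.0-1.0 fit score between a user and a trial's criteria."""
    if not (elig.min_age <= user.age <= elig.max_age):
        return 0.0  # hard fail: user falls outside the enrolled age range
    overlap = user.conditions & elig.excluded_conditions
    # Illustrative penalty: each matching exclusion criterion halves the score
    return 0.5 ** len(overlap)

# The statin scenario: the trial excluded pregnancy
statin_trial = Eligibility(min_age=18, max_age=75,
                           excluded_conditions=frozenset({"pregnancy"}))
print(applicability(User(age=32, conditions=frozenset({"pregnancy"})),
                    statin_trial))          # 0.5 -> flagged, not a clean fit

# The sleep-aid scenario: a 92-year-old is outside the enrolled ages
sleep_trial = Eligibility(min_age=18, max_age=65)
print(applicability(User(age=92), sleep_trial))  # 0.0 -> does not apply
```

A real scorer would resolve conditions to canonical codes and weight criteria individually, but the shape is the same: hard eligibility gates first, then a graded penalty for partial mismatches.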
Personalized Recall
“Did we surface every relevant piece of evidence for this user?”
Example Scenarios:
A query for “kava + anxiety” returns clinical trials showing benefit, but misses adverse-event reports of panic episodes in patients with comorbid ADHD.
A user asks about CBT for depression; a narrow query misses CBT clinical trials for PTSD, anxiety, and chronic pain, adjacent conditions where the same intervention is also tested.
MCP Tools
- Expand on I (Intervention), C (Comparator), and O (Outcome): via mechanism hubs, condition hubs, and causal links
- Expand to nearby SNOMED CT concepts: sibling and parent conditions where adjacent evidence lives
- Facts extracted from NIH-funded manuscripts that are not fully available to ChatGPT because of copyright restrictions
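Expansion to nearby SNOMED CT concepts can be sketched as a walk over an is-a hierarchy: from a query concept up to its parent and across to its siblings. The toy hierarchy below is an illustrative placeholder, not real SNOMED CT content or the tool's actual traversal:

```python
# Toy is-a hierarchy: child concept -> parent concept
# (illustrative names only, not real SNOMED CT codes or relationships)
PARENT = {
    "major depressive disorder": "mood disorder",
    "bipolar disorder": "mood disorder",
    "mood disorder": "mental disorder",
    "generalized anxiety disorder": "anxiety disorder",
    "ptsd": "anxiety disorder",
    "anxiety disorder": "mental disorder",
}

def neighbors(concept: str) -> set:
    """Expand one concept to its parent and sibling concepts,
    where adjacent evidence for the same intervention may live."""
    expanded = set()
    parent = PARENT.get(concept)
    if parent:
        expanded.add(parent)
        # Siblings: other concepts sharing the same parent
        expanded |= {c for c, p in PARENT.items()
                     if p == parent and c != concept}
    return expanded

# A narrow "CBT + depression" query also surfaces sibling conditions
print(sorted(neighbors("major depressive disorder")))
# ['bipolar disorder', 'mood disorder']
```

One level of parent-and-sibling expansion already widens recall; walking further up the hierarchy would also reach cousin conditions (e.g. from depression across to anxiety disorders), at the cost of looser relevance.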