Personalized Clinical-Trial Evidence for Your AI

We give your AI a deterministic MCP tool for personalized knowledge-graph retrieval.

Why this matters: clinical-trial findings carry population details on participant eligibility and outcomes — age, comorbidities, drug interactions — that surface-level AI summaries strip out. Medical AI loses both precision (citing evidence that doesn't apply to the user) and recall (missing evidence that does) unless the user happens to ask a perfectly personalized question about those population details. For example:

Personalized Precision
“Did the cited evidence’s enrolled population actually apply to this user?”
Examples:
A statin clinical trial shows LDL reduction — but excluded pregnant women; the AI recommends it to a 32-year-old trying to conceive.
A sleep-aid clinical trial reports benefit in adults 18–65 — but excluded patients over 80; the AI applies the result to a 92-year-old at high fall risk.
MCP Tool
  • Refine on P (Population) — narrow toward findings whose enrolled population matches the user
  • Applicability scoring — structured fit between user state and each clinical trial’s eligibility criteria
  • FHIR ingestion — Conditions, Medications, Observations resolved to canonical identifiers
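The applicability scoring above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes eligibility criteria have already been parsed into age bounds and excluded condition codes, and that the user's state has been distilled from FHIR resources into canonical identifiers. All class and function names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class UserState:
    """Patient state distilled from FHIR Conditions/Medications/Observations."""
    age: int
    condition_codes: set  # canonical (e.g. SNOMED CT) identifiers

@dataclass
class Eligibility:
    """One trial's enrollment criteria, pre-parsed into structured form."""
    min_age: int
    max_age: int
    excluded_codes: set = field(default_factory=set)

def applicability(user: UserState, crit: Eligibility) -> float:
    """Score in [0, 1]: 1.0 = user fits the enrolled population, 0.0 = excluded."""
    if user.condition_codes & crit.excluded_codes:
        return 0.0  # hard exclusion (e.g. pregnancy in a statin trial)
    if not (crit.min_age <= user.age <= crit.max_age):
        return 0.0  # outside the enrolled age range
    return 1.0

# The 92-year-old from the example above vs. an 18-65 sleep-aid trial:
elderly = UserState(age=92, condition_codes=set())
trial = Eligibility(min_age=18, max_age=65)
print(applicability(elderly, trial))  # 0.0 — the evidence does not apply
```

A real scorer would return graded fit rather than a binary, but the point stands: structured criteria make "does this trial's population include this user?" a deterministic check instead of a summarization guess.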
Personalized Recall
“Did we surface every relevant piece of evidence for this user?”
Examples:
A query for “kava + anxiety” returns benefit clinical trials, but misses adverse-event reports of panic episodes in patients with comorbid ADHD.
A user asks about CBT for depression; a narrow query misses CBT clinical trials for PTSD, anxiety, and chronic pain — adjacent conditions where the same intervention is also tested.
MCP Tool
  • Expand on I (Intervention) · C (Comparator) · O (Outcome) — via mechanism hubs, condition hubs, and causal links
  • Expand to nearby SNOMED CT concepts — sibling and parent conditions where adjacent evidence lives
  • Facts extracted from NIH-funded manuscripts whose full text is unavailable to ChatGPT because of copyright restrictions
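The expansion to nearby SNOMED CT concepts can be sketched like this. The hierarchy below is a tiny illustrative stand-in, not real SNOMED CT content — an actual implementation would traverse parent/child relationships from a SNOMED CT release rather than a hardcoded dictionary.

```python
# Hypothetical child-concept -> parent-concept slice of a SNOMED CT-like
# hierarchy. Real concept identifiers and relationships come from a
# SNOMED CT release; these entries are illustrative only.
PARENT = {
    "depression": "mood disorder",
    "anxiety": "mood disorder",
    "ptsd": "mood disorder",
    "chronic pain": "pain disorder",
}

def expand(concept: str) -> set:
    """Return the concept plus its parent and sibling concepts,
    so evidence for adjacent conditions is not missed."""
    nearby = {concept}
    parent = PARENT.get(concept)
    if parent:
        nearby.add(parent)
        # siblings: every concept sharing the same parent
        nearby |= {c for c, p in PARENT.items() if p == parent}
    return nearby

# A narrow "CBT for depression" query widens to sibling conditions
# where the same intervention is also trialed:
print(sorted(expand("depression")))
# ['anxiety', 'depression', 'mood disorder', 'ptsd']
```

This is the recall half of the story: the precision tool narrows by Population, while this one widens the Intervention/Comparator/Outcome net so adjacent evidence (CBT for PTSD, anxiety, chronic pain) surfaces alongside the literal query.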