SmartAtlas
Visual search engine for medical imaging

Visual search for radiology. Powered by your own archive.

Diagnose with the entire hospital's experience at your fingertips. Decision support that augments radiologist expertise — never replaces it.

Illustrative composition. CT images sourced from Wikimedia Commons — cystic renal cell carcinoma case by Hellerhoff (CC-BY-SA); thumbnails by Filip em (PD), Guite/Hinshaw/Lee (CC-BY 3.0), and Hellerhoff (CC-BY-SA). Smart Atlas does not display real patient data on this site.

17 min
avg case lookup today
0
data leaves the hospital
CT · X-ray · MRI
three modalities supported
The problem

Vast archives. Zero visual search.

Hospitals accumulate decades of confirmed cases — biopsied, operated on, followed for years. The institutional knowledge is already there. None of it is reachable when a radiologist needs it most: staring at the ambiguous study in front of them, on deadline.

The problem isn't a lack of data. It's that the hospital's data has no visual index. PACS can find every CT abdomen on a 64-year-old female from the past five years — it cannot find the four that look like the one on the screen right now.

Smart Atlas is the visual index radiology has been missing.

17 min
average lookup per case

Per published surveys, radiologists spend an average of 17 minutes per case manually searching textbooks, Radiopaedia, and personal memory for analogous prior studies — time taken from reading volume and from rest.

~70%
diagnostic accuracy ceiling

K. Anders Ericsson (ed.), Development of Professional Expertise (Cambridge, 2009): radiologists “learn very slowly, gradually approaching, but rarely exceeding 70% diagnostic accuracy.” Volume alone doesn't close the gap.

Not in PACS
search by image content

Existing PACS systems index by patient demographics, accession number, and study metadata. None of them index by what the image actually looks like — the one query a radiologist most needs.

Effectively zero
feedback to the reporter

Surgical findings, biopsy results, and follow-up imaging seldom make it back to the radiologist who wrote the original report. The professional learning loop is broken at the institution level.

Visual similarity search

See how the search works.

The mechanism is legible by design. Every step is something a clinician or hospital IT lead can audit, reproduce, and trust. That's the difference between a tool that gets used and a tool that gets installed and ignored.

01 · Clinician selects the current study

Within the existing reporting workflow, the radiologist marks the slice or series they want to query. No upload, no export — Smart Atlas reads from the same DICOM source that PACS already serves.

02 · A medical image encoder converts the study to an embedding

A specialized encoder — the kind benchmarked in our RAD-SRAC paper — projects the study into a 768-dimensional space where visually similar findings cluster naturally, regardless of patient or accession.

03 · Nearest-neighbor retrieval against the hospital index

Smart Atlas queries a vector index built over the hospital's own historical archive. Sub-second retrieval. The index is updated continuously as new studies are reported.

04 · Similar cases returned with their confirmed outcomes

Each retrieved study comes back with its full report, the surgical/biopsy/follow-up outcome from the EHR, and the diagnostic keywords highlighted. The radiologist sees evidence — and decides.
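Steps 02 and 03 above can be sketched in a few lines. This is a minimal, hypothetical illustration: the 768-dimensional embeddings are random stand-ins for real encoder output, and the cosine nearest-neighbor search is shown with plain NumPy rather than a production vector index.

```python
import numpy as np

# Hypothetical sketch of steps 02-03: embed a query study and retrieve
# its nearest neighbors from the hospital index. The 768-dim embeddings
# here are random stand-ins for the output of a medical image encoder.
rng = np.random.default_rng(0)

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Archive of previously reported studies: one 768-dim embedding each.
archive = l2_normalize(rng.normal(size=(1000, 768)))

def retrieve(query, archive, k=5):
    """Return indices of the k most visually similar archive studies."""
    sims = archive @ l2_normalize(query)   # cosine similarity per study
    top = np.argsort(-sims)[:k]            # highest similarity first
    return top, sims[top]

# A query that is a lightly perturbed copy of archive study 42
# should retrieve study 42 as its top match.
query = archive[42] + 0.01 * rng.normal(size=768)
idx, scores = retrieve(query, archive, k=5)
print(idx[0])  # 42
```

In practice the brute-force matrix product would be replaced by an approximate nearest-neighbor index so retrieval stays sub-second as the archive grows.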

Visualization. t-SNE projection of 1028 studies from a representative radiology archive, reduced to 2D for display. Cluster structure echoes the embedding geometry described in our RAD-SRAC paper.
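The figure's 2D layout comes from t-SNE; as a minimal, dependency-free stand-in, the sketch below projects synthetic embeddings to two dimensions with PCA (a linear reduction) via NumPy's SVD. The embedding matrix is a random placeholder, not real study data.

```python
import numpy as np

# Stand-in for the figure's t-SNE step: reduce 768-dim study embeddings
# to 2D for display. PCA via SVD is used here instead of t-SNE so the
# sketch needs only NumPy; the embeddings are random placeholders.
rng = np.random.default_rng(1)
emb = rng.normal(size=(1028, 768))

# Center the data, then project onto the top-2 right singular vectors.
centered = emb - emb.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ vt[:2].T   # shape (1028, 2): one point per study

print(coords_2d.shape)  # (1028, 2)
```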

In the reading room

The radiologist always has the final word.

Smart Atlas sits beside the report, not inside it. It surfaces evidence; it never writes the diagnosis. The interface is built for radiologists who already know how to read a scan — and just want the hospital's own confirmed history at hand.

  • Outcomes confirmed by biopsy, surgery, or follow-up imaging
  • Highlighted diagnostic keywords from the matched report
  • One click opens the full historical case in PACS
PACS · Radiology Workstation
SmartAtlas v1.0
Report draft · Case 28-491 · CT abdomen

Findings. 6 cm solid enhancing mass in the lower pole of the left kidney with a central stellate area of low attenuation. Separate simple cyst abutting the mass as well as a further simple cortical cyst in the left mid pole.

Impression. Imaging characteristics suggestive of an oncocytoma

auto-saved 12:42:08 · dictation: live
SmartAtlas · 3 confirmed cases from local archive
Report #7141 · 96%
Confirmed: Oncocytoma

7 cm left renal mass — biopsy under US confirmed oncocytoma. Stable at 24 mo.

Report #6228 · 92%
Confirmed: RCC, clear cell

Solid enhancing mass with central scar mimic. Histopathology revealed clear cell RCC.

Report #5903 · 88%
Surveillance: stable

Indeterminate renal lesion. Serial CT over 36 mo. — no interval growth.

Note. Smart Atlas does not generate impressions or flag pathologies. Retrieved cases are evidence — the radiologist decides.
Capabilities

Four pillars of smarter diagnostics.

Smart Atlas isn't one feature stretched across four marketing tiles. It's a single retrieval substrate — visual similarity over your hospital's archive — with four distinct workflows built on top of it.

01

Visual similarity search

Query your hospital's archive by the image itself — not by patient name, study date, or report keyword. The encoder turns each study into a high-dimensional embedding; nearest-neighbor retrieval surfaces visually analogous prior cases in milliseconds, each linked to its confirmed clinical outcome from the EHR.

In practice

You're staring at an indeterminate 4 cm renal mass with central scar. Smart Atlas returns 12 visually similar prior cases from the last 5 years — 7 oncocytomas, 3 chromophobe RCCs, 2 clear cell RCCs — each with its surgical pathology already linked.

Modalities: CT · X-ray · MRI · sub-second retrieval

02

Education & feedback

Trainees rarely know what subtle finding they should be searching for. Experienced radiologists rarely learn whether the study they reported six months ago was confirmed, refuted, or revised by the surgeon. Smart Atlas closes that loop by surfacing similar cases together with their downstream outcomes — biopsy, surgery, follow-up imaging.

In practice

A second-year resident pulls 50 confirmed cases of granulomatous lung disease, sorted by similarity to the chest CT they're reading right now. They study the cases that fooled their predecessors before submitting the report.

Continuous learning across every level of seniority

03

Batch analysis & workflow

Context-switching is the silent tax on radiology accuracy. Stack visually similar studies into a single focused review session: incidental adrenal lesions, indeterminate renal cysts, lung nodules requiring Lung-RADS categorization. The clinician's eye stays calibrated to one finding at a time.

In practice

Your morning worklist has 14 chest CTs. Smart Atlas clusters the 6 with sub-centimeter lung nodules together so they're read consecutively under matched windowing — instead of interleaved with abdomens, MSKs, and brains.

Higher accuracy, faster reporting, less mental fatigue
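The worklist grouping above can be sketched as a greedy nearest-neighbor chain over study embeddings, so that visually similar studies land next to each other in the reading order. Everything here is illustrative: the embeddings are synthetic, and a real system would cluster on encoder output.

```python
import numpy as np

# Illustrative sketch of batch grouping: order a worklist so that
# visually similar studies (by embedding distance) are read
# consecutively. Embeddings are synthetic placeholders.
rng = np.random.default_rng(3)

# 14 studies: 6 tightly clustered "lung nodule" embeddings (indices
# 0-5) plus 8 scattered other studies.
nodules = rng.normal(loc=5.0, scale=0.1, size=(6, 4))
others = rng.normal(loc=0.0, scale=3.0, size=(8, 4))
worklist = np.concatenate([nodules, others])

# Greedy chaining: always read the study nearest to the previous one.
order = [0]
remaining = set(range(1, len(worklist)))
while remaining:
    last = worklist[order[-1]]
    nxt = min(remaining, key=lambda i: np.linalg.norm(worklist[i] - last))
    order.append(nxt)
    remaining.remove(nxt)

# The six nodule studies end up adjacent at the front of the order.
print(order[:6])
```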

04

Guidelines & literature

The right reference at the right moment. Smart Atlas matches the study's visual signature to the appropriate diagnostic criteria, institutional protocols, and recent peer-reviewed papers — surfacing them inline, not buried three tabs deep in a guideline portal.

In practice

An MS-suspect MRI surfaces the 2017 McDonald criteria, your hospital's MS imaging protocol, and the three most-cited recent papers on MS imaging mimics — without leaving the reporting workspace.

Bridges daily practice with current evidence

Foundations

Our research work.

github.com/TheLion-ai
01 · Open-source dataset · 2024

UMIE Dataset. Twenty radiology corpora, one shared ontology.

The largest published unification of open-source medical imaging data. UMIE merges twenty independently released radiology corpora — KITS-23, CoronaHack, BrainMetShare, Chest X-ray 14, and a dozen more — into a single annotation schema aligned to the RadLex ontology used by the RSNA.

For anyone training a model that needs to generalize across CT, X-ray and MRI, UMIE is the first time the corpus you actually want exists in one place, under one schema, with reproducible preprocessing.

View on GitHub
882,774
medical images, fully annotated
20
open-source datasets unified
3
modalities · CT · X-ray · MRI
RadLex
ontology-aligned annotations
CC-BY-NC-SA
open license
Constituent collections (selected)
KITS-23 · CoronaHack · COCA · BrainMetShare · Brain Tumor Classification · Liver Tumor Segmentation · Alzheimer's Dataset · COVID-19 X-Ray · Intracranial Hemorrhage Masks · Chest X-ray 14 · Brain MRI · Knee Osteoarthritis · LIDC-IDRI · CT-ORG · CMMD Mammography · + 5 more
02 · Peer-reviewed paper · AAAI GenAI4Health 2025

RAD-SRAC. Why retrieving similar cases improves diagnostic accuracy.

A controlled study of visual retrieval in radiology classification, across three modalities and seven Vision Language Models. The question: does showing the classifier the nearest visually similar prior cases at inference time produce measurably better diagnoses than the model alone?

The answer is yes — reproducibly, across kidney tumor CTs (KITS-23), chest X-rays (CoronaHack), and brain tumor MRIs. Three to five retrieved reference cases is the sweet spot. The effect holds for frontier models and for smaller models you can deploy on-premise.

The same retrieval mechanism the paper benchmarks on a model is what runs underneath Smart Atlas in the reading room.

Read the paper
Retrieval-augmented classification with a domain-specific medical encoder substantially improves diagnostic accuracy across CT, X-ray, and MRI — without any model fine-tuning. Optimal performance is reached with just 3–5 retrieved reference images.
Excerpt · Klaudel, B. & Obuchowski, A.
RAD-SRAC: Simple Retrieval Augmented Classification for Radiology
GenAI4Health Workshop @ AAAI 2025
Tested on: KITS-23 · CoronaHack · Brain Tumor Classification. Models: Claude 3.5 Sonnet · GPT-4o · Gemini 1.5 Pro · Qwen2-VL · Pixtral.
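The retrieval half of the paper's setup can be sketched as follows. This is a hedged illustration, not the paper's method: RAD-SRAC passes the retrieved reference cases to a vision-language model, whereas here the classifier is stubbed as a simple majority vote over the retrieved cases' confirmed labels, on synthetic two-cluster embeddings.

```python
import numpy as np

# Sketch in the spirit of retrieval-augmented classification: retrieve
# the 3-5 most similar labeled archive cases, then classify. The VLM is
# stubbed as a majority vote; embeddings are synthetic placeholders.
rng = np.random.default_rng(2)

# Two synthetic clusters standing in for embedded, confirmed cases.
archive = np.concatenate([
    rng.normal(loc=+1.0, size=(50, 8)),   # confirmed oncocytomas
    rng.normal(loc=-1.0, size=(50, 8)),   # confirmed clear cell RCCs
])
archive_labels = ["oncocytoma"] * 50 + ["clear cell RCC"] * 50

def classify_with_retrieval(query, k=5):
    """Retrieve k nearest labeled cases; return the majority label."""
    dists = np.linalg.norm(archive - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [archive_labels[i] for i in nearest]
    return max(set(votes), key=votes.count), votes

# A query near the first cluster retrieves oncocytoma references.
pred, votes = classify_with_retrieval(np.full(8, 0.9), k=5)
print(pred)  # oncocytoma
```

The paper's finding that 3-5 retrieved references is the sweet spot corresponds to the `k` parameter here.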
The augmentation principle

Decision support. Never replacement.

Smart Atlas does not generate diagnoses. It does not flag pathologies. It does not score studies as normal or abnormal. The radiologist's expertise is not the bottleneck we are trying to remove.

What we remove is the friction between a difficult case and the hospital's own confirmed history. The system retrieves evidence; the clinician interprets it. Every claim made in the report is made by a human, on the basis of training, judgement, and visible reasoning. That distinction is not a marketing position — it's the regulatory and clinical foundation the product is built on.

Augmenting radiologist expertise. Never replacing it.

Data privacy & deployment

100% on-premise. Zero data leaves the hospital.

Every operation — image analysis, similarity matching, case retrieval — runs on the hospital's own infrastructure. The vendor never has access to patient data, and patient data never leaves the institutional boundary.

Hospital infrastructure
PACS: image archive
Smart Atlas: vector index · retrieval
EHR: reports · outcomes

data flows · PACS → Smart Atlas → EHR · read-only

Local processing

All embedding, indexing, and retrieval runs on hospital servers — air-gapped where required.

No cloud transfer

Patient images and reports never traverse external networks. Vendor systems require zero data access.

Full compliance

Architecture aligned with HIPAA, GDPR, and EU MDR expectations. Designed alongside hospital IT.

Pilot program

Bring Smart Atlas to your radiology department.

We pilot Smart Atlas with hospitals on the basis of a defined clinical question, against the institution's own archive, with full data residency. Tell us about your setup and we'll respond within two working days.