Broad Goals
- Devising deep-learning virtual readers
- Devising truth-affirmed databases for algorithmic validation
- Developing clinical diagnostic tools
- Assessing how data-informed and mechanistic models can best be integrated
- Assessing the uncertainty, reliability, explainability, and generalizability of AI-based analytics
Artificial intelligence (AI) technologies continue to revolutionize radiology and present exciting new ways to improve patient care. AI innovations permeate many areas of research in our labs. We currently use deep learning frameworks to segment patient CT image data into as many as 140 organs and structures. This work paves the way toward the next generation of our widely used XCAT anatomical models and also facilitates clinical research, such as defining normative trends in organ volumes and identifying hidden diseases through opportunistic screening.

Leveraging the rich data in radiologist reports, natural language processing algorithms can annotate many abnormalities at the scale of an entire health system. With hundreds of thousands of patient scans and the corresponding annotations, new AI algorithms can be developed for applications such as disease detection/classification, radiologist workflow triage, image harmonization, and image synthesis. These tools have also enabled the annotation and sharing of valuable clinical resources, such as the RAD-ChestCT and Duke Lung Cancer Screening datasets, each containing thousands of patient CT scans to foster new AI research.

Using the unique imaging data from the virtual imaging trials team, we aim to provide objective and quantitative measurements of performance against voxel-level ground truth, toward reliable, explainable, and generalizable AI-based analytics.
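To illustrate the kind of voxel-level evaluation described above, the minimal sketch below computes a per-organ Dice similarity coefficient between a predicted multi-organ segmentation and a reference label volume. It is not the group's actual pipeline; the file names, organ label IDs, and the use of nibabel for NIfTI I/O are assumptions made to keep the snippet self-contained.

```python
# Minimal sketch: scoring a multi-organ segmentation against a voxel-level
# ground-truth label volume with a per-organ Dice similarity coefficient.
# Assumes prediction and reference are integer label volumes of equal shape;
# file names and label IDs below are hypothetical.
import numpy as np
import nibabel as nib  # common library for reading NIfTI volumes


def dice_per_label(pred: np.ndarray, truth: np.ndarray, labels) -> dict:
    """Return the Dice similarity coefficient for each requested organ label."""
    scores = {}
    for label in labels:
        p = pred == label
        t = truth == label
        denom = p.sum() + t.sum()
        # Convention: an organ absent in both volumes counts as perfect agreement.
        scores[label] = 1.0 if denom == 0 else 2.0 * np.logical_and(p, t).sum() / denom
    return scores


if __name__ == "__main__":
    # Hypothetical file names; a voxel-level ground truth from a virtual
    # imaging trial would play the role of "truth.nii.gz" here.
    pred = nib.load("prediction.nii.gz").get_fdata().astype(np.int16)
    truth = nib.load("truth.nii.gz").get_fdata().astype(np.int16)
    organ_labels = {1: "liver", 2: "spleen", 3: "left kidney"}  # illustrative subset
    scores = dice_per_label(pred, truth, organ_labels.keys())
    for label, name in organ_labels.items():
        print(f"{name}: Dice = {scores[label]:.3f}")
```

Dice is only one example of an overlap metric; the same per-organ scoring loop extends naturally to surface-distance or volume-error measures when a voxel-level reference is available.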