Eye-Tracking and Odds Ratios

Interpretability refers to the ability to explain the mechanisms behind an AI system's decisions so that clinicians can build trust in such systems.

We compared the eye movements of clinical experts viewing medical images during diagnosis with the concepts/image regions used by deep learning systems to make classifications. When AI concepts align with expert eye movements, interpretability is enhanced and trust between human experts and AI systems grows. When they disagree, there is potential for the AI to inform the expert of novel features useful for clinical diagnosis.
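As a concrete illustration, agreement between an expert gaze heatmap and an AI saliency map can be quantified. The sketch below uses a simple intersection-over-union (IoU) of the top-quantile regions of each map; this is a minimal, hypothetical metric chosen for illustration, not the lab's published alignment measure.

```python
import numpy as np

def alignment_iou(gaze_map, saliency_map, quantile=0.9):
    """Threshold each map at its own top quantile and compute the
    intersection-over-union of the resulting binary masks."""
    g = gaze_map >= np.quantile(gaze_map, quantile)
    s = saliency_map >= np.quantile(saliency_map, quantile)
    inter = np.logical_and(g, s).sum()
    union = np.logical_or(g, s).sum()
    return inter / union if union else 0.0

# Toy example: two maps that peak in the same region align well.
rng = np.random.default_rng(0)
base = rng.random((64, 64)) * 0.1
base[:16, :16] += 1.0                      # shared "hot" region
gaze = base + rng.random((64, 64)) * 0.05  # simulated gaze heatmap
sal = base + rng.random((64, 64)) * 0.05   # simulated AI saliency map
score = alignment_iou(gaze, sal)
```

A score near 1 indicates strong gaze-saliency alignment; a score near 0 indicates the expert and the model attend to disjoint regions.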

Beyond this, we are training AI systems with expert eye movements to constrain and inform such systems to enhance their efficiency, accuracy, and interpretability (see below section on "Detecting Eye Disease Informed by Ophthalmology Resident Gaze Data").

We visualized the regions of importance in medical images that are used by AI for making classifications. Such visualization techniques include Gradient-weighted Class Activation Mapping (Grad-CAM); an example Grad-CAM heatmap of a full OCT report is shown at right in the above image (red/yellow colors highlight regions most important for classification, while blue/violet colors are least important). At left is an OCT report overlaid with clinician eye movements, shown by the translucent blue patches.
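A minimal sketch of the Grad-CAM technique (Selvaraju et al., 2017) is shown below, assuming a PyTorch setting; the toy CNN and random input are illustrative placeholders, not the OCT classifiers described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Placeholder classifier that also exposes its last conv feature map."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        fmap = self.features(x)          # (B, 16, H, W)
        pooled = fmap.mean(dim=(2, 3))   # global average pooling
        return self.head(pooled), fmap

def grad_cam(model, x, class_idx):
    """Weight the last conv feature maps by the gradient of the target
    class score, sum over channels, and apply ReLU."""
    model.eval()
    logits, fmap = model(x)
    fmap.retain_grad()                   # keep gradients on a non-leaf tensor
    logits[0, class_idx].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)  # (B, C, 1, 1)
    cam = F.relu((weights * fmap).sum(dim=1))           # (B, H, W)
    cam = cam / (cam.max() + 1e-8)                      # scale to [0, 1]
    return cam.detach()

x = torch.randn(1, 1, 32, 32)
cam = grad_cam(TinyCNN(), x, class_idx=0)
```

The resulting map can then be upsampled and rendered as the red/yellow-to-blue/violet overlay described above.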

(2) Using Fisher's Exact test and 3D CNNs optimized for multi-modal input (OCT and OCTA), we classified the presence or absence of 5 key features associated with late stages of Age-Related Macular Degeneration (AMD). We then ranked the strength of association of these 5 clinical features with the presence of late-stage AMD (non-neovascular, or 'dry', AMD and neovascular, or 'wet', AMD). We found alignment across all 5 features between experts and AI when evaluating the strength of association of the features with the occurrence of dry AMD. For wet AMD, however, experts and AI agreed only on the strength of association of CNV (Choroidal Neovascularization). The disagreement between AI and experts on the remaining features suggests potential to discover new AMD features of importance, and motivates future studies focused on AI-based segmentation/localization of features beyond global detection of feature presence/absence.
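The association ranking above rests on the odds ratio from a 2x2 contingency table (feature present/absent vs. late AMD present/absent), which Fisher's Exact test evaluates. The sketch below uses `scipy.stats.fisher_exact` with made-up counts purely for illustration; these are not study data.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = feature present/absent,
# columns = [late AMD, no late AMD]. Counts are illustrative only.
table = [[30, 10],   # feature present
         [15, 45]]   # feature absent
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
# odds_ratio = (30*45) / (10*15) = 9.0: the feature is strongly
# associated with late AMD in this fabricated example.
```

Repeating this per feature and sorting by odds ratio yields the association ranking that can then be compared between experts and the AI.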

Detecting Eye Disease Informed by Ophthalmology Resident Gaze Data

Fixation-Order-Informed ViT (Vision Transformer) and Ophthalmologist Gaze-Augmented ViT show greater accuracy, computational efficiency, and interpretability than a standard ViT for the detection of glaucoma.
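One way fixation order could be injected into a ViT is sketched below: each patch token receives a learned embedding indexed by the rank at which the expert fixated that patch. This is a hypothetical illustration of the general idea, assuming PyTorch, and is not the architecture from the paper cited below.

```python
import torch
import torch.nn as nn

class GazeRankEmbedding(nn.Module):
    """Add a learned embedding per fixation rank to each ViT patch token."""
    def __init__(self, n_patches, dim):
        super().__init__()
        # One extra index (n_patches) reserved for never-fixated patches.
        self.rank_embed = nn.Embedding(n_patches + 1, dim)

    def forward(self, tokens, fixation_order):
        # tokens: (B, n_patches, dim)
        # fixation_order: (B, n_patches), values 0..n_patches-1 give the
        # rank at which the expert fixated each patch; n_patches means
        # the patch was never fixated.
        return tokens + self.rank_embed(fixation_order)

B, P, D = 2, 16, 32
tokens = torch.randn(B, P, D)                  # placeholder patch embeddings
order = torch.randint(0, P + 1, (B, P))        # placeholder fixation ranks
out = GazeRankEmbedding(P, D)(tokens, order)
```

The augmented tokens would then feed into the transformer encoder in place of the standard patch embeddings.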


Read more here:

Kaushal, S., Sun, Y., Zukerman, R., Chen, R.W., and Thakoor, K.A. Detecting Eye Disease Using Vision Transformers Informed by Ophthalmology Resident Gaze Data. 45th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2023.

Empowering AI-Based Glaucoma Detection with Vision Transformers, Self-Supervised Learning, and Expert Gaze Patterns

Collecting a large amount of labeled medical data is difficult and costly, especially in specialties like ophthalmology. It is also common to have a large dataset in which only a small fraction of the samples is labeled.

Thus, we trained a vision transformer to detect glaucoma using optical coherence tomography (OCT) probability maps; this model outperforms CNN approaches in terms of specificity. The model provides interpretability through attention map visualization and Latent Dirichlet Allocation (LDA). We also implemented self-supervised training that leverages clinician eye-tracking data, enabling glaucoma detection with fewer explicit labels.
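For the attention map visualization mentioned above, a generic technique is attention rollout (Abnar & Zuidema, 2020): average attention over heads, add the residual connection, renormalize, and multiply across layers to trace how much each patch token reaches the [CLS] output. The sketch below applies it to random placeholder attention matrices; it is a standard interpretability recipe, not necessarily the paper's exact pipeline.

```python
import numpy as np

def attention_rollout(attentions):
    """attentions: list of per-layer arrays of shape (heads, tokens, tokens).
    Returns the rolled-out attention from the [CLS] token (index 0)
    to each patch token."""
    result = np.eye(attentions[0].shape[-1])
    for attn in attentions:
        a = attn.mean(axis=0)                    # average over heads
        a = a + np.eye(a.shape[0])               # account for the residual
        a = a / a.sum(axis=-1, keepdims=True)    # renormalize rows
        result = a @ result                      # compose across layers
    return result[0, 1:]                         # [CLS] -> patch tokens

# Placeholder: 3 layers, 4 heads, 16 patch tokens + 1 [CLS] token.
rng = np.random.default_rng(1)
layers = [rng.random((4, 17, 17)) for _ in range(3)]
scores = attention_rollout(layers)
```

Reshaping `scores` onto the patch grid yields a heatmap that can be compared directly with expert gaze maps.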
