ZIRI

Bias Detection Tool for Researchers

Although AI/ML algorithms show promise for clinical decision making, that potential has yet to be fully realized in healthcare. Even well-designed AI/ML algorithms and models can become inaccurate or unreliable over time due to factors such as changes in data distribution, shifts in user behavior, or changes in data capture. The NIH's NCATS challenge calls for solutions that help detect and eliminate latent predictive and social bias in these AI/ML models.

Our proposal is a novel tool that lets AI/ML researchers test their datasets for inherent social biases. The tool then runs the dataset through a variety of well-known ML models to compute algorithm-level predictive biases and displays them in an easy-to-navigate GUI. At the data level, the metrics include Class Imbalance, Demographic Disparity, and Jensen-Shannon Divergence. At the algorithmic level, the tool computes the statistical parity difference, disparate impact, equal opportunity difference, average odds difference, and Theil index. The solution is lightweight and easy to use for AI/ML researchers with any level of prior programming experience, and it is easily portable to a wide range of other fields.
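To illustrate what a few of these metrics measure, the sketch below computes them with NumPy/SciPy on a toy binary dataset with one sensitive attribute. This is a minimal illustration of the standard metric definitions, not the ZIRI implementation; the variable names and the random toy data are assumptions for the example.

```python
# Illustrative sketch of several listed metrics on synthetic data.
# Not the ZIRI implementation; names and data are for demonstration only.
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)   # sensitive attribute: 0 = group A, 1 = group B
label = rng.integers(0, 2, n)   # true binary outcome
pred = rng.integers(0, 2, n)    # model's binary prediction

# Data level: class imbalance between the two groups' sample counts.
n_a, n_b = (group == 0).sum(), (group == 1).sum()
class_imbalance = (n_a - n_b) / (n_a + n_b)

# Data level: Jensen-Shannon divergence between per-group label distributions.
p_a = np.bincount(label[group == 0], minlength=2) / n_a
p_b = np.bincount(label[group == 1], minlength=2) / n_b
js_div = jensenshannon(p_a, p_b) ** 2  # SciPy returns the JS *distance* (sqrt of divergence)

# Algorithmic level: statistical parity difference and disparate impact
# compare the positive-prediction rates across groups.
rate_a = pred[group == 0].mean()
rate_b = pred[group == 1].mean()
statistical_parity_diff = rate_b - rate_a
disparate_impact = rate_b / rate_a

# Algorithmic level: equal opportunity difference compares true-positive rates.
tpr_a = pred[(group == 0) & (label == 1)].mean()
tpr_b = pred[(group == 1) & (label == 1)].mean()
equal_opportunity_diff = tpr_b - tpr_a

print(f"class imbalance:           {class_imbalance:+.3f}")
print(f"Jensen-Shannon divergence: {js_div:.4f}")
print(f"statistical parity diff:   {statistical_parity_diff:+.3f}")
print(f"disparate impact:          {disparate_impact:.3f}")
print(f"equal opportunity diff:    {equal_opportunity_diff:+.3f}")
```

Values near zero (or a disparate impact near 1.0) indicate little measured bias; on this uniformly random toy data all of the metrics should be close to their unbiased values.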

Demonstration Video

Submission Report

NIH_Bias_Detection.pdf