Makers of AI models for use in healthcare should think through the potential actions of any “humans in the loop” of their tool’s implementation in real-world clinical settings.
This means AI designers ought to check the interpretability of the product’s outputs, anticipating “the performance of the Human-AI team rather than just the performance of the model in isolation.”
That’s one of 10 tenets the FDA presents in new guidelines defining “good machine learning practice” for medical device development.
The agency drew up the 2-page document jointly with Health Canada and the U.K.’s Medicines and Healthcare products Regulatory Agency (MHRA).