MIT Taxonomy Helps Build Explainability Into the Components of Machine-Learning Models

Image credit: Christine Daniloff, MIT; stock image

Researchers develop tools to help data scientists make the features used in machine-learning models more understandable for end users.
Explanation methods that help users understand and trust machine-learning models often describe how much each feature used in the model contributes to its prediction. For example, if a model predicts a patient's risk of developing a disease, a clinician might want to know how strongly each feature influences that prediction.
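A minimal sketch of the kind of feature-contribution explanation described above (not the MIT taxonomy itself): for a linear model, each feature's contribution to a single prediction can be read off as coefficient times feature value. The feature names and dataset are hypothetical, chosen only to mirror the clinical example.

```python
# Sketch of a feature-contribution explanation for one prediction.
# Assumes a linear model; feature names are hypothetical placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

feature_names = ["heart_rate", "blood_pressure", "cholesterol", "age"]

# Synthetic stand-in data with one feature per name above.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

patient = X[0]                            # one example to explain
contributions = model.coef_[0] * patient  # per-feature contribution to the logit

# Report features from most to least influential for this prediction.
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: abs(pair[1]), reverse=True):
    print(f"{name:>15}: {value:+.3f}")
```

Explanations like this are only as understandable as the features themselves, which is the gap the researchers' tools aim to address.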