By: Dan Robitzski
January 29, 2019
Sensitivity Training
In recent years, artificial intelligence has struggled with a major PR problem: intentionally or not, developers keep building biases into their systems, creating algorithms that reflect the same prejudiced perspectives common in society.
That’s why it’s intriguing that engineers from MIT and Harvard University say they’ve developed an algorithm that can scrub the bias from AI — like sensitivity training for algorithms.
Machines Teaching Machines
The tool audits algorithms for bias and helps re-train them to behave more equitably, according to new research presented this week at the Conference on Artificial Intelligence, Ethics, and Society.
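The article doesn't describe how the researchers' tool actually works, but the general idea of a bias audit can be illustrated with a common fairness metric. The sketch below, which is purely hypothetical and not the MIT/Harvard method, measures demographic parity: whether a model gives positive outcomes to two groups at similar rates.

```python
# Illustrative only: the researchers' actual auditing method is not
# described in this article. This sketch shows one common fairness check,
# demographic parity, which an audit-and-retrain tool might start from.
# All names here are hypothetical.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels ("a" or "b"), aligned with predictions
    """
    rate = {}
    for g in ("a", "b"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate["a"] - rate["b"])

# A biased classifier approves group "a" far more often than group "b":
preds = [1, 1, 1, 0, 0, 0, 0, 1]
grps = ["a", "a", "a", "a", "b", "b", "b", "a"]
print(f"demographic parity gap: {demographic_parity_gap(preds, grps):.2f}")
```

A gap near zero suggests the two groups are treated similarly on this metric; a large gap is one signal that re-training or re-weighting may be needed.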