ML Models Recognize Images That Are Nonsense. Is That a Problem?

Overinterpretation is a newly identified cause for concern in neural networks. Models trained on CIFAR-10, for example, made confident predictions even when 95 percent of an input image was missing and the remaining pixels were meaningless to humans.

Photo by David von Diemar / Unsplash

MIT scientists have recently identified a cause for concern in neural networks: “overinterpretation,” where a model makes confident predictions based on details that carry no meaning for humans, such as random pixel patterns or image borders. This is a potential problem for high-stakes applications like self-driving cars and medical diagnostics.
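To get a feel for the phenomenon, here is a minimal sketch of the kind of probe involved: mask out roughly 95 percent of a CIFAR-10 image and see how confident a classifier remains. The variable `model` stands in for any already-trained CIFAR-10 classifier you have on hand; the masking scheme and threshold are illustrative assumptions, not the researchers' exact setup.

```python
import torch
import torch.nn.functional as F
import torchvision
import torchvision.transforms as T

# Load a CIFAR-10 test image as a (3, 32, 32) tensor in [0, 1].
transform = T.ToTensor()
testset = torchvision.datasets.CIFAR10(root="./data", train=False,
                                       download=True, transform=transform)
image, label = testset[0]

# Keep only ~5% of pixel locations; zero out the rest (shared across channels).
keep_fraction = 0.05
mask = (torch.rand(1, 32, 32) < keep_fraction).float()
masked_image = image * mask

# `model` is assumed to be a trained CIFAR-10 classifier (e.g. a small CNN).
model.eval()
with torch.no_grad():
    logits = model(masked_image.unsqueeze(0))   # add batch dimension
    probs = F.softmax(logits, dim=1)
    confidence, predicted = probs.max(dim=1)

print(f"True class: {testset.classes[label]}")
print(f"Predicted:  {testset.classes[predicted.item()]} "
      f"(confidence {confidence.item():.2f})")
```

If the printed confidence stays high even though the masked image looks like noise to you, the model is leaning on signals that a human would not consider meaningful evidence.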

"While it may seem that the model is the likely culprit here, the datasets are more likely to blame."

This may mean creating datasets in more controlled environments. Today, training images are typically scraped from public sources and then labeled. For object recognition, however, it might be necessary to train models on objects photographed against uninformative backgrounds.

Reference: Nonsense can make sense to machine-learning models.

For more content around the Data World delivered to your mailbox, consider subscribing.

Check us out on Twitter - Follow to stay updated.