ML Models Recognize Images That Are Nonsense. Is That a Problem?
Overinterpretation is a cause for concern for neural networks. Models trained on CIFAR-10, for example, made confident predictions even when 95 percent of each input image was removed and the remaining pixels were meaningless to humans.
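
To make the setup concrete, here is a minimal sketch of the kind of masking experiment described above: keep roughly 5 percent of a CIFAR-10 image's pixels, zero out the rest, and inspect how confident a classifier remains. This is an illustration under stated assumptions, not the study's actual method: the untrained torchvision ResNet-18 below is only a stand-in for a trained CIFAR-10 classifier, and the random mask is a simplification (the research identified specific informative pixel subsets rather than random ones).

```python
# Sketch: mask 95% of a CIFAR-10 image and check classifier confidence.
# Assumption: ResNet-18 here stands in for a properly trained model.
import torch
import torchvision
import torchvision.transforms as T

# Load one CIFAR-10 test image as a (3, 32, 32) tensor in [0, 1].
testset = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=T.ToTensor()
)
image, label = testset[0]

# Randomly keep 5% of pixel locations; zero out the other 95%.
keep_fraction = 0.05
mask = (torch.rand(1, 32, 32) < keep_fraction).float()  # shared across channels
masked_image = image * mask

# Stand-in classifier (replace with an actual trained CIFAR-10 model).
model = torchvision.models.resnet18(num_classes=10)
model.eval()

with torch.no_grad():
    logits = model(masked_image.unsqueeze(0))  # add batch dimension
    probs = torch.softmax(logits, dim=1)
    confidence, prediction = probs.max(dim=1)

print(f"true label: {label}, predicted: {prediction.item()}, "
      f"confidence: {confidence.item():.2f}")
```

With a trained model in place of the stand-in, a high confidence on such a heavily masked, human-meaningless input is exactly the overinterpretation behavior the finding describes.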