Model Inversion Attacks: When AI Models Reveal Their Secrets
In 2019, researchers demonstrated something that sent shock waves through the machine learning community: using nothing more than a facial recognition API's confidence scores, they reconstructed recognizable images of people whose photos had been used to train the model. The reconstructions were not exact replicas, but they were close enough to identify real individuals who had never consented to this use of their likenesses.
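The core idea can be sketched in a few lines. In this minimal, hypothetical example (not the researchers' actual method), the "API" is a tiny softmax classifier with made-up weights, and the attacker sees only its confidence scores. The attack estimates gradients from score queries alone (finite differences) and climbs toward an input that maximizes the target identity's confidence:

```python
import numpy as np

# Hypothetical stand-in for a facial-recognition API: a softmax
# classifier over 3 identities with fixed, made-up weights. The
# attacker only observes the confidence scores it returns.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))  # 3 identities, 16-"pixel" inputs

def confidences(x):
    logits = W @ x
    e = np.exp(logits - logits.max())  # stable softmax
    return e / e.sum()

def invert(target, steps=500, lr=0.5, eps=1e-3):
    """Reconstruct an input that maximizes the target identity's
    confidence, using only score queries (finite differences)."""
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        base = confidences(x)[target]
        grad = np.zeros_like(x)
        for i in range(x.size):  # one extra query per "pixel"
            xp = x.copy()
            xp[i] += eps
            grad[i] = (confidences(xp)[target] - base) / eps
        x += lr * grad  # gradient ascent on the target's confidence
    return x

recon = invert(target=0)
print(confidences(recon)[0])  # confidence in identity 0 after inversion
```

Against a real model, the same loop recovers an input the classifier strongly associates with one training identity, which is why confidence scores alone can leak information about the training data.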