Differential Privacy for Privacy-Preserving Machine Learning 🔐
Tonight's explorations led me to the ✨gold✨ standard of mitigating data leakage in #ML -- #DifferentialPrivacy. The idea is to add carefully calibrated statistical noise to the data or to query results, so that the output reveals almost nothing about any individual data point.
To put it simply:
Privacy-Preserving ML = Data Privacy + Model Privacy
— Jigyasa Grover ✨ (@jigyasa_grover) July 11, 2022
A simple example is collecting and publishing users' online behavior, demographics, etc. while keeping the actual individual responses confidential, much like the US Census does. The goal is to learn about the community without learning about any individual in the community ⚽️
Differential Privacy is a probabilistic concept. We can use the Laplace mechanism to add controlled noise to a query's result, or use randomized response, which lets respondents conceal their true answers, introducing noise at the source and protecting privacy 🔐 #PrivacyPreservingML
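A minimal Python sketch of both mechanisms may help make this concrete. The function names and the choice of sensitivity and ε below are illustrative assumptions, not something prescribed by any particular library:

```python
import random


def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    For a query whose output changes by at most `sensitivity` when one
    person's data changes, this satisfies epsilon-differential privacy.
    """
    scale = sensitivity / epsilon
    # The difference of two Exponential(1/scale) draws is Laplace(0, scale).
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_value + noise


def randomized_response(true_answer: bool) -> bool:
    """Classic fair-coin randomized response (satisfies ln(3)-DP).

    Flip a coin: heads -> answer truthfully; tails -> flip again and
    report that second coin instead, giving every respondent deniability.
    """
    if random.random() < 0.5:
        return true_answer           # answer truthfully
    return random.random() < 0.5     # answer with an independent coin flip
```

The noisy answers are still useful in aggregate: if a fraction `p_obs` of randomized responses are "yes", the true proportion can be estimated as `2 * p_obs - 0.5`, so the analyst learns about the population without learning any individual's answer.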
#DifferentialPrivacy strikes a delicate balance between data accuracy and data privacy. However, with #MLEthics getting its due limelight, it is noteworthy that decreased accuracy & utility is a common issue among all statistical disclosure limitation methods and is not unique to DP.