Author: Kyle Wiggers
Publisher: TechCrunch+
Publication Year: 2022
Summary: The following article covers how federated learning allows machine learning (ML) models to be trained across multiple servers and/or devices without the sensitive training data ever being centralized. The technique is already used for healthcare data (especially as pandemic data was coming in quickly), financial data, and user device logs. However, the market for this technology is fragmented, as multiple companies each have a branch that implements it. DynamoFL, a very small startup (only 4 employees currently), aims to deliver both performance and privacy. It is uniquely suited to resist “membership inference” attacks, whereby bad actors determine whether a person’s data was used to train a model and thereby infer their sensitive attributes. Related to the previous article by Keyes and Flaxman, differential privacy (the framework employed by the Census Bureau) has a clear “privacy vs. performance tradeoff.” DynamoFL claims its novel algorithm avoids such a trade-off, which encourages teams implementing ML models to adopt it, as there is little drawback.
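The core idea summarized above, training a shared model while each party's raw data stays local, can be sketched with a minimal federated averaging (FedAvg) loop. Everything here is illustrative (the linear model, client data, and function names are assumptions, not DynamoFL's actual algorithm or API); it only shows that clients exchange model weights, never data.

```python
# Minimal FedAvg sketch: clients train locally, the server averages weights.
# All names and data are hypothetical; this is not DynamoFL's method.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Each client runs gradient descent on its own data.
    The raw (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server sees only weight vectors, aggregated in proportion
    to each client's local dataset size."""
    sizes = np.array(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three clients, each holding private data the server never sees
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

After a few rounds the averaged model approaches the underlying parameters even though no client ever shared its records, which is the property the article credits federated learning with preserving.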