Security, Privacy, and Robustness of Machine Learning and Machine Learning-Based Systems
As deep learning applications, in particular those based on deep neural networks (DNNs), proliferate, there is a pressing need to examine the security, privacy, and robustness both of machine learning-based protocols and of machine learning itself. There are already numerous demonstrations of how vulnerable trained DNNs can be to adversarial inputs: small, often imperceptible perturbations crafted to force a wrong prediction. In safety-critical systems, e.g., self-driving cars, such vulnerabilities can be catastrophic.
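To make the threat concrete, the sketch below crafts an adversarial input with the Fast Gradient Sign Method (FGSM), one standard attack; the toy PyTorch classifier, random input, and epsilon budget are illustrative assumptions, not part of the original discussion.

```python
# Minimal FGSM sketch: perturb an input within an epsilon bound so as to
# increase the model's loss. Model and data below are toy placeholders.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x plus an epsilon-bounded perturbation that raises the loss on label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction of the sign of the input gradient, then keep pixels valid.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy linear classifier over flattened 28x28 inputs (purely illustrative).
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # stand-in "image"
    y = torch.tensor([3])          # stand-in label
    x_adv = fgsm_perturb(model, x, y)
    print("max per-pixel change:", (x_adv - x).abs().max().item())
```

Even this single gradient step often suffices to flip the prediction of an undefended model, which is what makes the attack a useful baseline for robustness evaluation.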
Understanding these vulnerabilities can lead to ML algorithms, and application protocols built on them, that are more secure, robust, and privacy-preserving. Explainable machine learning and AI (XML/XAI) can play an important role in helping to understand potential vulnerabilities and resiliency issues of machine learning algorithms.
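As one hedged illustration of how an explanation technique can surface what a model is sensitive to, the sketch below computes a simple gradient-based saliency map; the toy model and input are again placeholders assumed for the example, not a method prescribed by the text.

```python
# Gradient-based saliency sketch: magnitude of the gradient of a class score
# with respect to each input pixel. Model and data are toy placeholders.
import torch
import torch.nn as nn

def saliency_map(model, x, target_class):
    """Per-pixel |d score / d input| for the given target class."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().squeeze(0)

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)
    sal = saliency_map(model, x, target_class=3)
    print("saliency shape:", tuple(sal.shape))
```

Inspecting such maps on clean versus adversarially perturbed inputs is one simple way an XAI tool can help diagnose where a model's decision is fragile.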