
Explainable AI & Social/ Ethical Aspects

Thursday, 06. June 2019, 13:15 - 14:45
Category: Lectures & Presentations

Adversarial Machine Learning - An Introduction to Backdoor, Evasion and Inversion Attacks by Rudolf Mayer, Senior Researcher, SBA Research & Lecturer, TU Wien

As Machine Learning is increasingly integrated into many applications, including safety-critical ones such as autonomous cars, robotics, visual authentication and voice control, wrong predictions can have a significant impact on individuals and groups. Advances in prediction accuracy have been impressive, and while machine learning systems can still make rather unexpected mistakes on relatively easy examples, the robustness of algorithms has also steadily increased.

However, many models, particularly Deep Learning approaches for image analysis, are rather susceptible to adversarial attacks. These attacks come, for example, in the form of small perturbations that remain (almost) imperceptible to human vision but can cause a neural network classifier to completely change its prediction about an image, with the model reporting very high confidence in the wrong prediction. A particularly strong form of attack is the so-called backdoor, where a specific key is embedded into a data sample to trigger a pre-defined class prediction in a controlled manner.
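To make the idea of such perturbation-based evasion attacks concrete, the sketch below applies a fast-gradient-sign-style perturbation to a tiny logistic classifier. This is an illustrative toy, not material from the talk: the weights `w`, bias `b`, input `x` and step size `eps` are made-up values chosen so the effect is visible, and the same principle is what, at scale, flips the predictions of deep image classifiers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear classifier; these parameter values are
# illustrative only.
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) >= 0.5)

# Clean input, correctly classified as class 1.
x = np.array([0.5, 0.2])
y = 1  # true label

# Gradient of the cross-entropy loss with respect to the *input*:
# for a logistic model, dL/dx = (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM-style step: move each input component a small amount eps in
# the direction that increases the loss the most.
eps = 0.2
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # 1 (clean input, correct)
print(predict(x_adv))  # 0 (prediction flipped by the perturbation)
```

Note that no component of the input changed by more than `eps`; for high-dimensional images, the analogous bounded change is what keeps the perturbation nearly imperceptible to humans while still moving the sample across the model's decision boundary.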

This talk will give an overview of various attacks (backdoors, evasion, inversion) and discuss how they can be mitigated.

Location SBA Research, 1040 Vienna
Contact Julia Pammer