Foundations and challenges - 2 November

ML/AI Foundations: from Pattern Recognition to Deep Learning and Explainable AI

Massih-Reza Amini (Université Grenoble Alpes, LIG Lab.)

The first set of lectures aims to provide the foundations and state-of-the-art approaches of machine learning and artificial intelligence. The talk covers a broad range, from the basics to the emerging topic of explainable AI, which addresses the interpretability problem of AI and machine learning.

 

On the safety and dependability implications of AI/ML

Andrea Bondavalli (University of Florence, Italy)

This lecture examines the safety and dependability implications of AI/ML. Safety and dependability of AI/ML are transversal to security issues, yet fundamental for an appropriate use of AI/ML technologies. A clear example is the dependability and safety of decision systems (e.g., autonomous driving), where erroneous decisions may have detrimental consequences even when not correlated to security issues.

Machine learning for intrusion detection systems: design and evaluation

Tommaso Zoppi (University of Florence, Italy)

Intrusion detection systems rely on machine learning to detect anomalies, i.e., deviations from the expected behaviour. The lecture i) reviews the basics of anomaly detection and intrusion detection systems, ii) presents the state of the art on anomaly detection algorithms for cyber-security and on attack datasets, and iii) discusses how to evaluate and compare algorithms with respect to target attack models and datasets. Further, the lecture presents a live tutorial with the tool RELOAD (Rapid EvaLuation Of Anomaly Detectors), which allows participants to put into practice all the presented notions. The lecture will show how to exploit RELOAD to evaluate intrusion detection algorithms, from the identification of the target dataset and metrics to its execution and the comparative analysis of results.
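To make the core idea concrete, here is a minimal sketch of the anomaly criterion the abstract mentions (flagging deviations from expected behaviour). This is an illustrative statistical detector, not RELOAD's implementation; the feature name and threshold are assumptions for the example.

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag values that deviate from the sample mean by more than
    `threshold` standard deviations -- a basic anomaly criterion."""
    mean = statistics.fmean(values)
    std = statistics.stdev(values)
    if std == 0:
        return []
    return [v for v in values if abs(v - mean) / std > threshold]

# Hypothetical feature: observed packet sizes with one obvious outlier
sizes = [500, 510, 495, 505, 498, 502, 5000]
print(zscore_anomalies(sizes, threshold=2.0))  # -> [5000]
```

Real detectors work on multivariate traffic features and are compared, as the lecture discusses, via metrics such as precision and recall against labelled attack datasets.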

 

Hacking AI: Towards Algorithms that Humans Can Trust

Battista Biggio (University of Cagliari, Italy)

Data-driven AI and machine-learning technologies have become pervasive and are even able to outperform humans on specific tasks. However, it has been shown that they suffer from hallucinations known as adversarial examples, i.e., imperceptible adversarial perturbations of images, text and audio that fool these systems into perceiving things that are not there. This has seriously called into question their suitability for mission-critical applications, including self-driving cars and autonomous vehicles. The phenomenon is even more evident in cybersecurity domains with a clearer adversarial nature, such as malware and spam detection, in which data is purposely manipulated by cybercriminals to undermine the outcome of automatic analyses.
As current data-driven AI and machine-learning methods have not been designed to deal with the intrinsic adversarial nature of these problems, they exhibit specific vulnerabilities that attackers can exploit either to mislead learning or to evade detection. This talk reviews previous work in the field of adversarial machine learning, along with the design of more secure and explainable learning algorithms, in the context of real-world applications including computer vision, biometric identity recognition and computer security.
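As a concrete illustration of an evasion attack, the sketch below applies the Fast Gradient Sign Method (FGSM), a classic way to craft adversarial perturbations, to a toy logistic-regression classifier. The weights and inputs are made-up values for the example; the talk's scope is much broader than this one attack.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM for logistic regression: step each input feature by eps
    in the sign of the loss gradient, increasing the model's error."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # For cross-entropy loss, d(loss)/dx_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical model and input, correctly classified as class 1
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1
x_adv = fgsm(x, y, w, b, eps=0.6)
before = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
after = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
print(before > 0.5, after < 0.5)  # -> True True: the perturbed input is misclassified
```

The same gradient-sign principle, applied to deep networks, produces the imperceptible image, text and audio perturbations described above.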

Tell Me How You Move I will Tell You Who You Are: Latest Advances in Location Privacy

Sonia Ben Mokhtar, LIRIS – CNRS

The widespread adoption of continuously connected smartphones and tablets has driven the adoption of mobile applications, many of which use location to provide a geolocated service. The usefulness of these services is no longer in question: getting directions to work in the morning, checking in at a restaurant at noon and checking the next day's weather in the evening are all possible from any mobile device with a GPS chip. In these applications, locations are sent to a server, often hosted on an untrusted cloud platform, which uses them to provide personalized answers. However, nothing prevents these platforms from gathering, analyzing and possibly sharing the collected information. This opens the door to many threats, as location data makes it possible to infer sensitive information about users, including one's home, workplace or even religious/political preferences. For this reason, many schemes have been proposed in recent years to enhance location privacy while still allowing people to enjoy geolocated services. In this presentation, I will review the latest advances in location privacy protection mechanisms and give some insights into open challenges and under-explored questions.
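One common family of protection mechanisms perturbs coordinates on the device before they reach the server. The sketch below is a simplified illustration that adds independent Laplace noise to each coordinate; the full planar-Laplace mechanism used in geo-indistinguishability draws the noise radially, and the coordinates and epsilon value here are assumptions for the example.

```python
import math
import random

def laplace_noise(scale):
    """Sample zero-mean Laplace noise via the inverse-CDF method."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def obfuscate(lat, lon, epsilon):
    """Perturb a coordinate pair before sending it to the service.
    Smaller epsilon -> larger noise -> stronger privacy, lower utility."""
    scale = 1.0 / epsilon
    return lat + laplace_noise(scale), lon + laplace_noise(scale)

random.seed(7)  # fixed seed so the example is reproducible
noisy = obfuscate(45.19, 5.72, epsilon=10.0)  # hypothetical location
print(noisy)
```

This captures the utility/privacy trade-off the talk discusses: the service still receives an approximately correct position, but the exact point of interest is hidden in the noise.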
