Industrial Applications & Standards (3 Nov. 2021)

Anonymization for Data Mining

Benjamin Nguyen (INSA Centre Val de Loire & Laboratoire d'Informatique Fondamentale d'Orléans)

In the context of the GDPR (General Data Protection Regulation), it is highly preferable to run data mining algorithms on anonymous data rather than on personal microdata, which requires much stronger security guarantees. This presentation introduces several anonymization models that can be applied to relational or tabular data (k-anonymity, l-diversity, differential privacy). We will also show how data anonymization impacts the quality of subsequent data analysis tasks using classification or clustering algorithms (C4.5, k-means, multi-layer perceptron...). A short demonstration using WEKA for data analysis and ARX for data anonymization will also be given.
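As a minimal illustration of one of the models named above, the sketch below implements the Laplace mechanism for an epsilon-differentially private counting query. The dataset, query, and parameter values are invented for demonstration and are not taken from the talk.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Answer a counting query with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical microdata: ages of individuals in a survey.
ages = [23, 35, 45, 52, 61, 38, 29]
noisy_answer = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

Each invocation returns the true count (here 3) perturbed by fresh noise; smaller epsilon means stronger privacy but a noisier, less useful answer, which is exactly the utility trade-off the talk examines for mining tasks.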

Security & Safety Issues in the Operation of Autonomous Passenger Vehicles on Public Roads

Javier Ibanez-Guzman (Renault S.A. Research Division, France)

The security and safety of autonomous vehicles operating on public roads is a major concern. These vehicles can be subject to internal faults and external disturbances that result in dangerous situations. Machine learning methods are today an integral part of these systems. Despite notable progress, such systems can be spoofed or hijacked, and can be subject to the malicious behaviour of other road users. The presentation addresses these issues from a functional perspective of an autonomous vehicle, in order to identify which components are the most vulnerable and what the implications are. It also outlines the security policy that needs to be elaborated when such vehicles are deployed.

 

ML for Cybersecurity in converged energy systems: a saviour or a villain?

Angelos Marnerides (Glasgow University)

In today’s networked systems, ML-based approaches are regarded as core functional blocks for a plethora of applications, ranging from network intrusion detection and unmanned aerial vehicles to medical applications and smart energy systems. Nonetheless, regardless of the capabilities demonstrated by such schemes, it has recently been shown that they are also prone to attacks targeting their intrinsic algorithmic properties. Attackers are nowadays capable of orchestrating adversarial ML processes, mainly by injecting noisy or malicious training data samples in order to undermine the learning process of a given ML algorithm. This talk discusses this relatively new problem and demonstrates examples targeting Virtual Power Plant (VPP) applications.
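The poisoning attack described above can be illustrated on a toy scale: the sketch below shows how injecting a few mislabelled training samples shifts the decision boundary of a simple nearest-centroid classifier. The data and classifier are invented for this illustration and are unrelated to the VPP examples in the talk.

```python
def nearest_centroid(train):
    """Fit a 1-D nearest-centroid classifier; return a predict function."""
    xs0 = [x for x, y in train if y == 0]
    xs1 = [x for x, y in train if y == 1]
    c0 = sum(xs0) / len(xs0)
    c1 = sum(xs1) / len(xs1)
    # Predict whichever class centroid is closer to the input.
    return lambda x: 0 if abs(x - c0) <= abs(x - c1) else 1

# Clean training set: class 0 clustered near 0, class 1 near 9.
clean = [(0, 0), (1, 0), (2, 0), (8, 1), (9, 1), (10, 1)]
clf_clean = nearest_centroid(clean)

# Poisoning: the attacker injects class-1-looking points mislabelled
# as class 0, dragging the class-0 centroid toward class 1.
poisoned = clean + [(9, 0), (10, 0)]
clf_poisoned = nearest_centroid(poisoned)

print(clf_clean(6), clf_poisoned(6))  # prints: 1 0
```

Two mislabelled points out of eight are enough to flip the prediction on the test input 6, which is the essence of the training-data poisoning threat the talk examines at the scale of real ML pipelines.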

 

Standardization initiatives

Lecture to be confirmed: Riccardo Mariani (NVIDIA, USA)

The talk discusses standardization initiatives related to the introduction and application of ML/AI in critical systems (e.g., ISO/PAS 21448:2019, Safety of the Intended Functionality).

 

Legally Responsible ML and AI for Critical Systems

Lecture to be confirmed: Jeanne Mifsud Bonnici (University of Groningen, The Netherlands)

The phrase 'Responsible AI' has been gaining momentum in recent discussions on the development of machine learning and AI. Industry and academia have put forward different key principles of 'responsible AI', such as explainability, reproducibility of operations, accuracy, and data risks. In intergovernmental initiatives and at the national level, the debate is increasingly shifting from ethical responsibility to hard legal responsibility. This talk discusses the legal implications (and legal responsibility) of ML and AI for critical systems.
