NeuNAC: A novel fragile watermarking algorithm for integrity protection of neural networks
Botta M.; Cavagnino D.; Esposito R.
2021-01-01
Abstract
The last decade has witnessed a massive deployment of Machine Learning tools in everyday automated tasks. Neural Networks are nowadays used in a growing number of application areas because of their excellent performance. Unfortunately, many researchers have shown that they can be attacked and fooled in several different ways, which can dangerously impair their ability to perform their tasks correctly. In this paper we describe a watermarking algorithm that can protect and verify the integrity of (Deep) Neural Networks deployed in safety-critical systems, such as autonomous driving systems or monitoring and surveillance systems.
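The abstract does not detail how NeuNAC embeds or checks its watermark, so the Python sketch below only illustrates the general idea behind fragile watermarking of network parameters: hide a keyed digest in the least significant bits of the float32 weights, so that any later modification of the parameters breaks verification. The key, function names, and embedding scheme are all illustrative assumptions, not the authors' method.

```python
# Illustrative sketch of fragile watermarking for NN integrity checking.
# NOT the NeuNAC algorithm from the paper: a keyed digest is hidden in the
# LSBs of the weights; tampering with any parameter breaks verification.
import hashlib
import hmac

import numpy as np

KEY = b"secret-verification-key"  # assumed shared secret (hypothetical)

def embed(weights: np.ndarray) -> np.ndarray:
    """Embed a keyed digest into the LSBs of the float32 weight bits."""
    bits = weights.astype(np.float32).view(np.uint32)
    cover = bits & ~np.uint32(1)                  # clear each LSB first
    digest = hmac.new(KEY, cover.tobytes(), hashlib.sha256).digest()
    # Spread the digest over the weights, one bit per parameter (cycling).
    payload_bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    mark = np.resize(payload_bits, bits.size).astype(np.uint32)
    return (cover | mark.reshape(bits.shape)).view(np.float32)

def verify(weights: np.ndarray) -> bool:
    """Return True iff the embedded digest still matches the weights."""
    bits = weights.astype(np.float32).view(np.uint32)
    cover = bits & ~np.uint32(1)
    digest = hmac.new(KEY, cover.tobytes(), hashlib.sha256).digest()
    payload_bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    expected = np.resize(payload_bits, bits.size).astype(np.uint32)
    return bool(np.array_equal(bits & np.uint32(1),
                               expected.reshape(bits.shape)))

w = np.random.randn(4, 4).astype(np.float32)
wm = embed(w)
assert verify(wm)        # intact parameters verify
wm[0, 0] += 1e-3         # tamper with a single weight
assert not verify(wm)    # check fails (with overwhelming probability)
```

The watermark is fragile by design: flipping only the last mantissa bit leaves the weights numerically almost unchanged, while the keyed digest ties every LSB to the rest of the parameter set, so even a one-parameter attack is detected.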
File | Description | Type | Access | Size | Format
---|---|---|---|---|---
1-s2.0-S0020025521006642-main.pdf | Full article | Publisher's PDF | Open access | 929.25 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.