Blind detection and localization of video temporal splicing exploiting sensor-based footprints / Mandelli, Sara; Bestagini, Paolo; Tubaro, Stefano; Cozzolino, Davide; Verdoliva, Luisa. - (2018), pp. 1362-1366. (Paper presented at the European Signal Processing Conference, held in Rome in September) [10.23919/EUSIPCO.2018.8553511].

Blind detection and localization of video temporal splicing exploiting sensor-based footprints

Davide Cozzolino; Luisa Verdoliva
2018

Abstract

In recent years, the possibility of easily editing video sequences has led to the diffusion of user-generated video compilations obtained by splicing together different video shots in time. In order to perform forensic analysis on this kind of video, it can be useful to split the whole sequence into the set of originating shots. As video shots are seldom obtained with a single device, a possible way to identify each video shot is to exploit sensor-based traces. State-of-the-art solutions for sensor attribution rely on Photo Response Non-Uniformity (PRNU). Although this approach has proved robust and efficient for images, exploiting PRNU in the video domain is still challenging. In this paper, we tackle the problem of blind video temporal splicing detection leveraging PRNU-based source attribution. Specifically, we consider videos composed of few-second shots coming from various sources that have been temporally combined. The focus is on blind detection and temporal localization of splicing points. The analysis is carried out on a recently released dataset composed of videos acquired with mobile devices. The method is validated on both non-stabilized and stabilized videos, thus showing the difficulty of working in the latter scenario.
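As a rough, illustrative sketch of the kind of PRNU-based attribution mentioned above (not the authors' pipeline), the Python snippet below correlates per-frame noise residuals against a reference camera fingerprint and marks the frames where the attribution decision flips as candidate temporal splicing points. The Gaussian-blur denoiser, the normalized-correlation score, the fixed threshold, and the synthetic frames are all simplifying assumptions; practical PRNU pipelines rely on wavelet denoising and peak-to-correlation-energy statistics.

import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(frame, sigma=1.0):
    # Crude PRNU-style residual: frame minus a low-pass (denoised) version.
    # Published pipelines use wavelet-based denoisers; this is a stand-in.
    frame = frame.astype(np.float64)
    return frame - gaussian_filter(frame, sigma)

def ncc(a, b):
    # Normalized cross-correlation between two equally sized residuals.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def per_frame_scores(frames, fingerprint):
    # Attribution score of every frame against a reference camera fingerprint.
    return np.array([ncc(noise_residual(f), fingerprint) for f in frames])

def detect_splices(scores, threshold=0.1):
    # Candidate splicing points: frames where the attribution decision flips.
    attributed = scores > threshold  # threshold chosen arbitrarily for this toy
    return [i for i in range(1, len(attributed)) if attributed[i] != attributed[i - 1]]

# Toy usage: two synthetic "shots" carrying different multiplicative sensor patterns.
rng = np.random.default_rng(0)
h, w = 240, 320
prnu_a = rng.normal(0.0, 0.02, (h, w))
prnu_b = rng.normal(0.0, 0.02, (h, w))
shot_a = [128 * (1 + prnu_a) + rng.normal(0, 2, (h, w)) for _ in range(30)]
shot_b = [128 * (1 + prnu_b) + rng.normal(0, 2, (h, w)) for _ in range(30)]
frames = shot_a + shot_b
fingerprint = np.mean([noise_residual(f) for f in shot_a], axis=0)
print(detect_splices(per_frame_scores(frames, fingerprint)))  # expected: [30]

Note that in-camera electronic stabilization geometrically misaligns the sensor pattern from frame to frame, so a direct correlation like the one above would first require per-frame registration, which is one reason the stabilized scenario mentioned in the abstract is harder.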
2018
978-908279701-5


Use this identifier to cite or link to this document: https://hdl.handle.net/11588/740922
Citations
  • Scopus: 13
  • Web of Science (ISI): 5