Video tampering localisation using features learned from authentic content

Pamela Johnston, Eyad Elyan, Chrisina Jayne

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Video tampering detection remains an open problem in the field of digital media forensics. As video manipulation techniques advance, it becomes easier for tamperers to create convincing forgeries that can fool human eyes. Deep learning methods have already shown great promise in discovering effective features from data, particularly in the image domain; however, they are exceptionally data hungry. Labelled datasets of varied, state-of-the-art tampered video that are large enough to facilitate machine learning do not exist and, moreover, may never exist while the field of digital video manipulation is advancing at such an unprecedented pace. It is therefore vital to develop techniques that can be trained on authentic or synthesised video but used to localise the patterns of manipulation within tampered videos. In this paper, we develop a framework for tampering detection that derives features from authentic content and uses them to localise key frames and tampered regions in three publicly available tampered video datasets. We use convolutional neural networks to estimate the quantisation parameter, deblocking filter setting and intra/inter prediction mode of pixel patches from an H.264/AVC sequence. Extensive evaluation suggests that these features can aid localisation of tampered regions within video.
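To illustrate the kind of per-patch codec-feature estimator the abstract describes, the sketch below runs a tiny convolutional network over a luma patch and outputs a distribution over quantisation-parameter (QP) bins. It is a minimal, hypothetical illustration, not the authors' published model: the 64×64 patch size, the 13 QP bins, the single-layer architecture and the random weights are all assumptions made here for brevity.

```python
import numpy as np

def conv2d(x, w, b):
    """Valid 2-D convolution: x is (H, W, Cin), w is (k, k, Cin, Cout), b is (Cout,)."""
    k = w.shape[0]
    H, W, _ = x.shape
    out = np.zeros((H - k + 1, W - k + 1, w.shape[3]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = x[i:i + k, j:j + k, :]
            out[i, j, :] = np.tensordot(window, w, axes=([0, 1, 2], [0, 1, 2])) + b
    return out

def predict_qp_bin(patch, params):
    """Forward pass: conv -> ReLU -> global average pool -> softmax over QP bins."""
    h = np.maximum(conv2d(patch, params["w1"], params["b1"]), 0.0)
    pooled = h.mean(axis=(0, 1))                 # one value per feature map
    logits = pooled @ params["w2"] + params["b2"]
    e = np.exp(logits - logits.max())            # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
n_bins = 13                                      # illustrative QP binning, not from the paper
params = {
    "w1": rng.normal(0.0, 0.1, (5, 5, 1, 8)),    # 8 small 5x5 filters
    "b1": np.zeros(8),
    "w2": rng.normal(0.0, 0.1, (8, n_bins)),
    "b2": np.zeros(n_bins),
}
patch = rng.random((64, 64, 1))                  # a single 64x64 luma patch
probs = predict_qp_bin(patch, params)
```

In the paper's framework, per-patch predictions such as these (for QP, deblocking setting and intra/inter mode) are aggregated across a frame; regions whose estimated coding parameters are inconsistent with their surroundings are candidates for tampering.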

Original language: English
Pages (from-to): 12243-12257
Number of pages: 15
Journal: Neural Computing and Applications
Volume: 32
Issue number: 16
DOIs
Publication status: Published - 30 May 2020

Bibliographical note

Publisher Copyright:
© 2019, The Author(s).

Copyright:
© 2020 Elsevier B.V. All rights reserved.

