Abstract
Signatures have long been one of the most accepted and practical means of user verification, despite being vulnerable to skilled forgers. In contrast, EEG signals have more recently been shown to be more difficult to replicate and to provide better biometric information in response to a known stimulus. In this paper, we propose combining these two biometric traits using a multimodal Siamese Neural Network (mSNN) for improved user verification. The proposed mSNN learns discriminative temporal and spatial features from the EEG signals using an EEG encoder and from the offline signatures using an image encoder. The features from the two encoders are fused into a common feature space for further processing. A Siamese network then employs a distance metric based on the similarity and dissimilarity of the input features to produce the verification result. The proposed model is evaluated on a dataset of 70 users comprising 1400 unique samples. The novel mSNN achieves a classification accuracy of 98.57%, with a True Positive Rate (TPR) of 99.29% and a False Acceptance Rate (FAR) of 2.14%, outperforming the current state-of-the-art by 12.86% (in absolute terms). The proposed network architecture may also be applicable to the fusion of other neurological data sources to build robust biometric verification or diagnostic systems with limited data sizes.
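To make the described architecture concrete, the following is a minimal PyTorch sketch of a multimodal Siamese network of the kind the abstract outlines: a 1-D convolutional EEG encoder, a 2-D convolutional signature-image encoder, concatenation-based fusion into a common feature space, and a distance metric over pairs of fused embeddings. The layer sizes, kernel widths, the 14-channel EEG assumption, the concatenation fusion, and the contrastive loss are all illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch of a multimodal Siamese network (mSNN) in PyTorch.
# Encoder depths, kernel sizes, channel counts, and the concatenation-based
# fusion are illustrative assumptions; the paper's architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EEGEncoder(nn.Module):
    """1-D convolutional encoder for multichannel EEG segments."""
    def __init__(self, n_channels=14, embed_dim=128):  # 14 channels is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):               # x: (batch, channels, time)
        return self.fc(self.net(x).squeeze(-1))


class SignatureEncoder(nn.Module):
    """2-D convolutional encoder for offline (grayscale) signature images."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):               # x: (batch, 1, H, W)
        return self.fc(self.net(x).flatten(1))


class MultimodalSiamese(nn.Module):
    """Fuses EEG and signature embeddings, then compares two fused vectors."""
    def __init__(self, embed_dim=128, fused_dim=64):
        super().__init__()
        self.eeg_enc = EEGEncoder(embed_dim=embed_dim)
        self.sig_enc = SignatureEncoder(embed_dim=embed_dim)
        self.fuse = nn.Sequential(nn.Linear(2 * embed_dim, fused_dim), nn.ReLU())

    def embed(self, eeg, sig):
        # Fuse the two modality embeddings into a common feature space.
        return self.fuse(torch.cat([self.eeg_enc(eeg), self.sig_enc(sig)], dim=1))

    def forward(self, eeg_a, sig_a, eeg_b, sig_b):
        # Euclidean distance between fused embeddings of the two samples:
        # small distance suggests a genuine (same-user) pair, large an impostor.
        return F.pairwise_distance(self.embed(eeg_a, sig_a), self.embed(eeg_b, sig_b))


def contrastive_loss(distance, label, margin=1.0):
    """label = 1 for genuine pairs, 0 for impostor pairs (assumed training objective)."""
    return torch.mean(label * distance.pow(2) +
                      (1 - label) * F.relu(margin - distance).pow(2))
```

At verification time, such a model would compare an enrolled (EEG, signature) pair against a probe pair and accept the claim when the resulting distance falls below a chosen threshold, which is the standard decision rule for distance-based Siamese verifiers.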
Original language | English |
---|---|
Pages (from-to) | 17-27 |
Number of pages | 11 |
Journal | Information Fusion |
Volume | 71 |
Early online date | 19 Jan 2021 |
DOIs | |
Publication status | Published - 1 Jul 2021 |
Bibliographical note
Funding Information: Prof. Chang’s research is partly supported by VC Research (number: VCR 0000050). Dr. Scheme’s research is partly supported by the New Brunswick Innovation Foundation, Canada.
Publisher Copyright:
© 2021 Elsevier B.V. All rights reserved.