
dc.contributor.author  Alkeem, Ebrahim Al
dc.contributor.author  Yeun, Chan Yeob
dc.contributor.author  Yun, Jaewoong
dc.contributor.author  Yoo, Paul D
dc.contributor.author  Chae, Myungsu
dc.contributor.author  Rahman, Arafatur
dc.contributor.author  Asyhari, A. Taufiq
dc.date.accessioned  2021-07-26T10:57:32Z
dc.date.available  2021-07-26T10:57:32Z
dc.date.issued  2021-06-12
dc.identifier.citation  Alkeem, E.A., Yeun, C.Y., Yun, J., Yoo, P.D., et al. (2021) Robust Deep Identification using ECG and Multimodal Biometrics for Industrial Internet of Things, Ad Hoc Networks, 121, Article Number 102581  en
dc.identifier.issn  1570-8705  en
dc.identifier.doi  10.1016/j.adhoc.2021.102581  en
dc.identifier.uri  http://hdl.handle.net/2436/624222
dc.description  This is an accepted manuscript of an article published by Elsevier in Ad Hoc Networks on 12/06/2021, available online at https://doi.org/10.1016/j.adhoc.2021.102581. The accepted version of the publication may differ from the final published version.  en
dc.description.abstract  The use of electrocardiogram (ECG) data for personal identification in the Industrial Internet of Things can achieve near-perfect accuracy under ideal conditions. However, real-life ECG data are often exposed to various types of noise and interference. A more reliable identification method can be achieved by employing additional features from other biometric sources. This work therefore proposes a novel, robust and reliable identification technique grounded in multimodal biometrics, which uses deep learning to combine fingerprint, ECG and facial image data, particularly for identification and gender classification. The multimodal approach allows the model to handle a range of input domains without requiring independent training on each modality, and inter-domain correlations improve the model's generalization on these tasks. In multitask learning, the losses from one task help to regularize the others, leading to better overall performance. The proposed approach merges the embeddings of the modalities using feature-level and score-level fusion. To the best of our knowledge, this is a pioneering work combining multimodality, multitasking and different fusion methods. The proposed model achieves better generalization on the benchmark dataset used, and feature-level fusion outperforms the other fusion methods. The model is further validated on noisy and incomplete data with missing modalities, and analyses of the experimental results are provided.  en
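As a rough illustration of the fusion strategy described in the abstract, the sketch below shows feature-level fusion of per-modality embeddings (ECG, fingerprint, face) feeding two multitask heads for identification and gender classification. This is a minimal PyTorch sketch under assumed embedding sizes and class counts, not the authors' implementation; all module names and dimensions are placeholders.

# Hypothetical sketch (not the authors' code): feature-level fusion of
# per-modality embeddings with multitask heads, as described in the abstract.
import torch
import torch.nn as nn

class FusionMultitaskNet(nn.Module):
    def __init__(self, emb_dims=(128, 128, 128), fused_dim=256,
                 num_identities=90, num_genders=2):
        super().__init__()
        # One encoder per modality (ECG, fingerprint, face); simple MLPs stand
        # in for the modality-specific deep networks.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, 128), nn.ReLU()) for d in emb_dims]
        )
        # Feature-level fusion: concatenate the embeddings, then project.
        self.fusion = nn.Sequential(nn.Linear(128 * len(emb_dims), fused_dim), nn.ReLU())
        # Multitask heads share the fused representation, so each task's loss
        # regularizes the other.
        self.id_head = nn.Linear(fused_dim, num_identities)
        self.gender_head = nn.Linear(fused_dim, num_genders)

    def forward(self, modalities):
        feats = [enc(x) for enc, x in zip(self.encoders, modalities)]
        fused = self.fusion(torch.cat(feats, dim=-1))
        return self.id_head(fused), self.gender_head(fused)

# Joint training objective: a sum of the identification and gender losses.
model = FusionMultitaskNet()
ecg, fingerprint, face = (torch.randn(4, 128) for _ in range(3))
id_logits, gender_logits = model([ecg, fingerprint, face])
loss = nn.CrossEntropyLoss()(id_logits, torch.randint(0, 90, (4,))) \
     + nn.CrossEntropyLoss()(gender_logits, torch.randint(0, 2, (4,)))

Score-level fusion, by contrast, would combine the per-modality classifier outputs rather than the embeddings; the abstract reports that the feature-level variant performs better on the benchmark used.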
dc.description.sponsorship  This work is supported in part by the Center for Cyber-Physical Systems, Khalifa University, under Grant Number 8474000137-RC1-C2PS-T3. The authors declare no conflict of interest.  en
dc.format  application/pdf  en
dc.language.iso  en  en
dc.publisher  Elsevier  en
dc.relation.url  https://www.sciencedirect.com/science/article/abs/pii/S1570870521001219?via%3Dihub  en
dc.subject  Personal identification  en
dc.subject  multimodal biometrics  en
dc.subject  Deep Learning  en
dc.subject  gender classification  en
dc.subject  electrocardiogram  en
dc.subject  fingerprint  en
dc.subject  face recognition  en
dc.subject  feature-level fusion  en
dc.title  Robust deep identification using ECG and multimodal biometrics for industrial internet of things  en
dc.type  Journal article  en
dc.identifier.eissn  1570-8705
dc.identifier.journal  Ad Hoc Networks  en
dc.date.updated  2021-07-21T13:20:11Z
dc.identifier.articlenumber  102581
dc.date.accepted  2021-06-09
rioxxterms.funder  The University of Wolverhampton  en
rioxxterms.identifier.project  NA  en
rioxxterms.version  AM  en
rioxxterms.licenseref.uri  https://creativecommons.org/licenses/by-nc-nd/4.0/  en
rioxxterms.licenseref.startdate  2022-06-12  en
dc.source.volume  121
refterms.dateFCD  2021-07-26T10:56:08Z
refterms.versionFCD  AM


Files in this item

Name: Publisher version
Name: Rahman_Robust_Deep_Identificat ...
Size: 3.225 MB
Format: PDF

Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by-nc-nd/4.0/