A neuro-inspired visual tracking method based on programmable system-on-chip platform
dc.contributor.author | Yang, Shufan | |
dc.contributor.author | Wong-Lin, KongFatt | |
dc.contributor.author | Andrew, James | |
dc.contributor.author | Mak, Terrence | |
dc.contributor.author | McGinnity, T. Martin | |
dc.date.accessioned | 2017-03-07T11:50:51Z | |
dc.date.available | 2017-03-07T11:50:51Z | |
dc.date.issued | 2017-01-20 | |
dc.identifier.citation | Yang, S., Wong-Lin, K., Andrew, J. et al. Neural Comput & Applic (2018) 30: 2697. https://doi.org/10.1007/s00521-017-2847-5 | |
dc.identifier.issn | 0941-0643 | |
dc.identifier.doi | 10.1007/s00521-017-2847-5 | |
dc.identifier.uri | http://hdl.handle.net/2436/620401 | |
dc.description.abstract | Using a programmable system-on-chip to implement computer vision functions poses many challenges due to tight constraints on cost, size and power consumption. In this work, we propose a new neuro-inspired image processing model and implement it on a Xilinx Z702c system-on-chip board. By using an attractor neural network model to store the object’s contour information, we eliminate the computationally expensive re-initialisation of the curve evolution at every new iteration or frame. Our experimental results demonstrate that this integrated approach achieves accurate and robust tracking of objects, even when they are partially or completely occluded in the scene. Importantly, the system processes 640 × 480 video streams in real time at 30 frames per second using a single low-power Xilinx Zynq-7000 system-on-chip platform. This proof-of-concept work demonstrates the advantage of incorporating neuro-inspired features to solve image processing problems involving occlusion. | |
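dc.description.note | Illustrative sketch (not the authors' implementation): the abstract's idea of an attractor memory holding the object's contour can be pictured as a Hopfield-style network that stores a binary contour mask via Hebbian learning and recalls it from a partially occluded observation, so a level-set tracker could be re-seeded without full re-initialisation. The 16 × 16 mask size, the occlusion pattern and the helper names (store, recall) are assumptions for illustration only.

# Minimal sketch, assuming a Hopfield-style attractor memory for a contour mask;
# not the method described in the paper, only an illustration of the concept.
import numpy as np

N = 16 * 16  # flattened 16x16 contour mask (illustrative size)

def store(patterns):
    # Hebbian learning of bipolar (+1/-1) patterns; returns the weight matrix.
    W = np.zeros((N, N))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / len(patterns)

def recall(W, probe, steps=20):
    # Synchronous updates until the state settles into a stored attractor.
    s = probe.copy()
    for _ in range(steps):
        nxt = np.sign(W @ s)
        nxt[nxt == 0] = 1
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

# Stored pattern: a square contour rendered as a bipolar mask.
mask = -np.ones((16, 16))
mask[3, 3:13] = mask[12, 3:13] = 1
mask[3:13, 3] = mask[3:13, 12] = 1
contour = mask.flatten()

W = store([contour])

# Occluded observation: the top half of the contour pixels is wiped out.
occluded = contour.copy()
occluded[: N // 2] = -1

restored = recall(W, occluded)
print("pixels recovered:", int(np.sum(restored == contour)), "/", N)

With a single stored pattern the network converges in one synchronous step, which mirrors how a stored contour attractor could seed the next frame's curve evolution instead of re-initialising it. | |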
dc.language.iso | en | |
dc.publisher | Springer | |
dc.relation.url | http://link.springer.com/10.1007/s00521-017-2847-5 | |
dc.subject | Visual object tracking | |
dc.subject | Mean-shift | |
dc.subject | Level set | |
dc.subject | Attractor neural network model | |
dc.subject | Occlusion | |
dc.subject | System-on-chip | |
dc.title | A neuro-inspired visual tracking method based on programmable system-on-chip platform | |
dc.type | Journal article | |
dc.identifier.journal | Neural Computing and Applications | |
dc.date.accepted | 2017-01-10 | |
rioxxterms.funder | University of Wolverhampton | |
rioxxterms.identifier.project | UOW070317SY | |
rioxxterms.version | AM | |
rioxxterms.licenseref.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | |
rioxxterms.licenseref.startdate | 2018-01-20 | |
dc.source.volume | 30 | |
dc.source.issue | 9 | |
dc.source.beginpage | 2697 | |
dc.source.endpage | 2708 | |
refterms.dateFCD | 2018-10-19T09:28:38Z | |
refterms.versionFCD | AM | |
refterms.dateFOA | 2018-01-20T00:00:00Z | |